serialized, low-load test pass in parallel test runner

Hi all,

On OS X (and frankly on Linux sometimes as well, but predominantly OS X), we have tests that will sometimes fail under significant load (e.g. running the concurrent test suite, exacerbated if we crank up the number of threads, but bad enough if we run at “number of concurrent workers == number of logical cores”).

I’m planning on adding a serialized, one-worker-only phase to the end of the concurrent test run, where the load is much lighter since only one worker will be processing at that point. Then, for tests that fail in the first run, I’d re-run them in that serialized, single-worker phase. On the OS X side, this would eliminate a significant number of test failures that are both hard to diagnose and hard to justify spending significant amounts of time on in the short run. (There’s a whole other conversation to have about fixing them for real, i.e. working through all the race conditions and/or faulty test logic assumptions that are stressed to the max under heavier load, but practically speaking, there are so many of them that this is going to be impractical to address in the short/mid term.)
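For concreteness, the two-phase scheme above could be sketched roughly like this. This is a minimal illustration, not the actual runner: `run_test` is a hypothetical callable standing in for invoking one test, and I'm using threads here purely for brevity — the real runner's worker mechanics would differ.

```python
from concurrent.futures import ThreadPoolExecutor

def run_suite(tests, run_test, workers):
    # Run each test via run_test(name) -> bool (True == pass);
    # return the names of the tests that failed.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        outcomes = list(pool.map(run_test, tests))
    return [name for name, ok in zip(tests, outcomes) if not ok]

def run_with_serial_rerun(tests, run_test, workers):
    # Phase 1: concurrent run at the full worker count (high load).
    failures = run_suite(tests, run_test, workers)
    # Phase 2: re-run only the first-phase failures with a single
    # worker, so the machine is under minimal load. Anything that
    # still fails here is reported as a genuine failure.
    return run_suite(failures, run_test, workers=1)
```

A load-induced flake fails in phase 1 but passes the quiet phase-2 re-run, so only persistent failures get reported.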

My question to all of you is whether we’d want this functionality in top-of-tree llvm.org lldb. If not, I’ll do it in one of our branches. If so, we can talk about possibly having a category or some other mechanism to mark those tests that are eligible to be run in the follow-up serialized, low-load pass. Up front, I was just going to allow any test to fall into that bucket. The one benefit to having it in top-of-tree llvm.org is that, once I enable test reporting on the green dragon public llvm.org OS X LLDB builder, that builder will be able to take advantage of this, and will most certainly tag fewer changes as breaking a test (in the case where the test is just one of the many that fail under high load).
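If we went the marker route rather than the any-test-is-eligible route, the opt-in could be as small as a decorator that records eligible tests. The names below are made up for illustration; the real decorator and how it ties into the existing lldb test categories would be settled in review.

```python
# Hypothetical opt-in marker -- the actual mechanism (likely a test
# category) would be decided if/when this lands in top of tree.
RERUN_ELIGIBLE = set()

def rerun_under_low_load(func):
    """Mark a test as eligible for the serialized, low-load re-run pass."""
    RERUN_ELIGIBLE.add(func.__name__)
    return func

@rerun_under_low_load
def test_attach_while_busy():
    pass  # stand-in for a real load-sensitive test

def eligible_for_rerun(failed_names):
    # The runner would filter phase-one failures down to opted-in tests.
    return [n for n in failed_names if n in RERUN_ELIGIBLE]
```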

Let me know your thoughts either way.

Thanks!

Note this is similar to the flakey test mechanism, with the primary difference being that the re-run is done in a minimal CPU load environment rather than wherever the failure first occurred. The existing flakey test rerun logic is not helpful for the high-load-induced failures that I’m looking to handle.

Chromium’s test framework uses the same technique. It has the potential to really slow things down if you have a lot of failing tests. You might want some kind of threshold for giving up, i.e. “here’s 50 failures, I’ll stop running the rest so devs see results sooner.”

Otherwise, yeah, this seems reasonable for lldb.
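The giving-up threshold suggested above could be a small gate in front of the re-run phase; the 50-failure cutoff below is just the example number from this thread, and `select_for_rerun` is a hypothetical helper, not anything in the existing runner.

```python
MAX_RERUN_FAILURES = 50  # example cutoff from the thread; tune as needed

def select_for_rerun(failures, limit=MAX_RERUN_FAILURES):
    # With this many failures the breakage is almost certainly real,
    # so skip the low-load re-run phase and report everything now.
    if len(failures) > limit:
        return [], failures          # nothing re-run, all reported
    return failures, []              # everything gets the quiet re-run
```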

I think it sounds like something that would be useful in general. I'd
even go a step further and say that we can replace the current flakey
test mechanism with your proposed solution. If we do that (remove the
current flakey mechanism when this is in place), then I think it would
be super-great as we don't increase the number of moving parts and we
can think of this as just an upgrade of an inferior solution (the
current flakey mechanism has always felt like a hack to me) with a
better one.

If you want to automatically re-run tests, then we can have a mode
that does that, but I'd like to have it off by default. I have several
reasons for this:
- you get to feel bad for having to add flakey decorators, which may
encourage you to fix things
- if you make a change (hopefully only locally :) ) which breaks a lot
of tests, you want this to fail quickly instead of waiting for reruns
- if you make a change that makes things flakey (!), you may not
actually notice it because of the reruns

cheers,
pl

Thanks, Reid and Pavel!

> Chromium's test framework uses the same technique. It has the potential to
> really slow things down if you have a lot of failing tests. You might want
> some kind of threshold for giving up, i.e. here's 50 failures, I'll stop
> running the rest so devs see results sooner.
>
> Otherwise, yeah, this seems reasonable for lldb.

Thanks, Reid! The max failure threshold seems like a good idea if/when I
put in an auto re-run-under-low-load mechanism.

I think I will start with that (the auto mode + threshold) to get it up and
running, then add the test markers and the opt-in mode, which we'll make the
default per Pavel's comments.

-Todd