I appreciate the value of the feature - but it's possible the test
doesn't pull its weight. Is the code that implements the feature
liable to failure, or often touched? If it's pretty static and failure
is unlikely, the time spent and the flaky failures may not be worth
the chance of catching a low-probability bug.
I don't have many data points here. I haven't needed to touch the code
or tests since 2018. I haven't been asked to review any patches
related to the timeout feature since then so I presume nobody has
touched it since.
Another option might be to reduce how often, or in which
configurations, the test is run. LLVM_ENABLE_EXPENSIVE_CHECKS
presumably only affects code within LLVM itself, not test cases - but
maybe I'm wrong there and the parameter could be used (with the
timeouts then bumped up quite a bit to make the test much more
reliable), or something similar could be implemented at the lit check
level?
Ah, compiler-rt tests use EXPENSIVE_CHECKS to disable certain tests:
./compiler-rt/test/lit.common.configured.in:set_default("expensive_checks",
@LLVM_ENABLE_EXPENSIVE_CHECKS_PYBOOL@)
./compiler-rt/test/fuzzer/large.test:UNSUPPORTED: expensive_checks
Could you bump the timeouts a fair bit and disable the tests except
under expensive checks?
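Following the compiler-rt pattern quoted above, the annotation in the test file might look something like this - a hedged sketch, since the lit self-test suite would first have to define an "expensive_checks" feature in its own lit.cfg for the directive to match anything:

```
# Hypothetical: only run this test when the expensive_checks feature
# is available (the inverse of compiler-rt's UNSUPPORTED usage above).
# REQUIRES: expensive_checks
# ... RUN lines, with the timeouts bumped well above the worst case.
```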
Only running that test in the "expensive_checks" configuration kind of
abuses that configuration option, because the test is not actually an
expensive check. It's just that the test is sensitive to the
performance of the running system: if the system is under heavy load
when the test runs, it's more likely to fail.
One thing we could do to remove fragility in the test is to remove the
running of `short.py` in the test. This is only invoked to check that
it's possible for a command to run to completion in the presence of a
fixed timeout. If we can live without testing that part (i.e. we only
test that a timeout can be reached) then the test should be much more
robust.
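To illustrate (a standalone sketch, not the actual lit test - an inline sleep stands in for the commands the test runs): the "timeout is reached" direction is robust because a long-running command exceeds a short timeout regardless of system load, whereas the `short.py` direction needs a fast command to finish under the limit, which heavy load can break:

```python
import subprocess
import sys

def hits_timeout(cmd, timeout):
    """Run cmd with a fixed timeout; report whether the timeout fired."""
    try:
        subprocess.run(cmd, timeout=timeout)
        return False
    except subprocess.TimeoutExpired:
        return True

# Robust direction: a deliberately long-running command reliably
# exceeds a short timeout, no matter how loaded the machine is.
print(hits_timeout([sys.executable, "-c", "import time; time.sleep(10)"], 1))

# Fragile direction (what running short.py exercises): a "fast"
# command must finish before the deadline, which a heavily loaded
# system can cause it to miss.
print(hits_timeout([sys.executable, "-c", "pass"], 1))
```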
If for some reason that's not enough, another thing that might help
with this problem is to not run the lit tests at the same time as all
the other tests. The hope here is that the lit tests would then run
while the system load is lower. We could do this by changing the build
system to:
* Make all `check-*` (except `check-lit`) targets depend on the
`check-lit` target. This would ensure that lit gets tested before we
test everything else in LLVM.
* Remove the lit test directory from inclusion in all `check-*`
targets (except `check-lit`) so we don't test lit twice.
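In CMake terms, the first bullet might amount to something like this (a hypothetical sketch - check-llvm stands in for each of the other check-* targets, and the second bullet would still need the lit test directory dropped from those targets' test inputs):

```cmake
# Hypothetical: make another check-* target depend on the lit
# self-tests (check-lit), so lit is validated before we rely on it.
add_dependencies(check-llvm check-lit)
```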
This also has the nice benefit of testing lit **before** we use it to
test everything else in LLVM. Today we don't do that; we just run all
the tests in one big lit invocation (i.e. we test lit at the same time
that we are using it). It does have some downsides though:
* If any of the lit tests fail, we won't run the rest of the tests, so
you won't get the results of the other tests until the lit tests are
fixed.
* When running any of the `check-*` targets (apart from `check-lit`)
you have to wait for the lit tests to run before any other tests run.
What do you think?