We run lit tests at Google using a custom runner on a distributed build system similar to Bazel.
In particular, we run most of the llvm-project tests both when pulling in upstream revisions and for any change to our internal repository that touches nearby files.
I wanted to share some of our experiences in case they’re useful, and in the hope that this project may result in something we can use too.
I’m being brief here, but happy to provide more details.
Our build system wants to run each test in isolation (separate process, sandboxed).
Making each test hermetic separates concerns nicely (the same distributed runner is used for all kinds of testing, not just lit).
This model is also easier to fit into other harnesses (e.g. I imagine Ninja could make a good local test driver).
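To make the model concrete, here’s a minimal sketch in Go, assuming a hypothetical per-test runner binary; the real sandboxing is of course far more thorough than a scratch directory and a scrubbed environment:

```go
// Minimal sketch of the isolation model (hypothetical names; the real
// sandbox does far more). Each test gets its own process, a private
// scratch directory, and an explicit, minimal environment.
package main

import (
	"os"
	"os/exec"
)

func runIsolated(runnerBin, testFile string) error {
	scratch, err := os.MkdirTemp("", "lit-test-")
	if err != nil {
		return err
	}
	defer os.RemoveAll(scratch) // no state survives the run

	cmd := exec.Command(runnerBin, testFile)
	cmd.Dir = scratch                        // private working directory
	cmd.Env = []string{"PATH=/usr/bin:/bin"} // nothing inherited from the caller
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Test file path passed on the command line.
	if err := runIsolated("./lit-runner", os.Args[1]); err != nil {
		os.Exit(1)
	}
}
```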
Compared to, say, a custom driver that talks to a custom worker server running many tests per subprocess, there’s not much of that we would be able to reuse.
I know there are OSS Bazel projects that want to run lit tests and would struggle with that model too.
The biggest problem with using the standard lit tool for hermetic tests is that it’s too slow to start up just to run a single test. Fundamentally, the slow parts are the config system and the startup cost of Python programs.
The config system was much simpler for us because our tests (mostly) run in a single config, so we could flatten it out into a list of features and substitutions.
But in a more general system, if we can produce the config data from config logic as a build step, then it can be cached in the usual way and simply fed into each test.
You’ll need to untangle config specific to the machine running the test from config specific to the machine driving the tests.
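For illustration, the flattened config can be little more than a feature list plus a substitution map; the field names and paths below are invented for the example, not lit’s actual schema:

```go
// Illustrative only: a flattened, cacheable per-test config, produced
// once as a build step instead of being evaluated by Python at test time.
// Field names and values here are invented for the example.
package litcfg

type Config struct {
	Features      []string          // gates REQUIRES / UNSUPPORTED lines
	Substitutions map[string]string // expanded in RUN lines before execution
}

var example = Config{
	Features: []string{"shell", "x86-registered-target"},
	Substitutions: map[string]string{
		"%clang":    "third_party/llvm/bin/clang",
		"FileCheck": "third_party/llvm/bin/FileCheck",
	},
}
```

Producing that as a build artifact means the expensive config logic runs once and is cached, instead of running on every test invocation.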
I wrote a hermetic test runner in Go (not my favorite language, but it starts up fast and has good subprocess support).
It’s greatly simplifying to be able to assume you can fork a real shell, and that only limited state (CWD, exported vars) can leak from one RUN line to the next. This works fine for us in practice (but we don’t test on Windows).
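A rough sketch of what that buys you (this is not our actual runner, and real substitution expansion is more careful than a string replace): expand each RUN line, then hand all the lines to one real shell, and the CWD/export leakage falls out for free.

```go
// Rough sketch, not the actual runner: expand substitutions in each RUN
// line, then execute the lines as a single POSIX shell script. Running
// one real shell means CWD changes and exported variables naturally
// carry from one RUN line to the next; `set -e` stops at the first
// failing line.
package main

import (
	"os"
	"os/exec"
	"strings"
)

func runTest(runLines []string, subs map[string]string) error {
	var script strings.Builder
	script.WriteString("set -e\n")
	for _, line := range runLines {
		for pattern, value := range subs {
			line = strings.ReplaceAll(line, pattern, value)
		}
		script.WriteString(line + "\n")
	}
	cmd := exec.Command("/bin/sh", "-c", script.String())
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := runTest(
		[]string{
			"mkdir -p t && cd t",    // the CWD change leaks to the next line...
			"export GREETING=hello", // ...and so do exported variables
			"echo $GREETING > out.txt",
			"grep hello out.txt",
		},
		map[string]string{}, // substitutions omitted in this toy example
	)
	if err != nil {
		os.Exit(1)
	}
}
```

Joining the lines into one script is the simplest way to get both stop-on-first-failure and the state leakage; running each line separately would need explicit CWD/env tracking instead.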
It has some nice features, like printing a transcript of the test run, highlighting directives and stderr output, showing pre- and post-expansion lines, and annotating each line with its result.
I should be able to share the code for this; it’s nothing terribly surprising.
It’s less than 1000 LOC and runs almost all LLVM tests. IMO it would be worthwhile to keep the lit spec very simple and remove some of the marginal features that have crept in over the years; we chose to simply drop some tests rather than deal with all the corner cases.
(Before this existed, we ran sed over the lit tests to turn them into shell scripts. That worked, but it was hard to maintain, and the output on failure was hard to read… actually, the upstream lit runner has the latter problem too!)
I’m sure I’ve forgotten things, but I think those were my biggest takeaways. Needing to solve the config problem, plus the Go dependency, were the main reasons I didn’t push to make these changes upstream.
Hope this is useful, or maybe at least interesting.