Hi,
I broke the MLIR build yesterday and the two Flang bots told me about it pretty much right away. Yay!
That is how I always thought the setup should work (modulo that we all try not to break builds).
Today I got emails from an MLIR bot and I was a bit confused. I looked at the configuration of the two
MLIR bots and it seems they test commits one by one, with the backlog that you would expect.
I was wondering: is my observation correct, and is this the desired behavior?
I don't necessarily think such a setup is bad, but both MLIR bots run this way. Testing each commit
individually might catch more problems, but with a longer delay; I'm unsure if it is worth it.
I figured I'd bring this up, but I'm fine if people don't see the need
for change (or more bots).
~ Johannes
Indeed there is quite a backlog here right now: http://lab.llvm.org:8011/builders/mlir-windows and here http://lab.llvm.org:8011/builders/mlir-nvidia
I agree that 17 hours of latency is likely too high to justify the non-batching.
Note that the bots run ninja first, followed by ninja check-mlir: they likely build much more than
they need, so the build could be faster by skipping the first step.
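For reference, ninja already builds the dependencies of whatever target it is given, so the plain ninja step is redundant; a rough sketch of the difference (assuming a standard LLVM build directory, exact target sets depend on the CMake configuration):

```shell
# Current bot behavior: build all default targets, then run the MLIR tests.
ninja              # builds everything, including targets check-mlir never uses
ninja check-mlir

# Equivalent but cheaper: check-mlir depends on all the tools and libraries
# the MLIR tests need, so ninja builds exactly those before running the tests.
ninja check-mlir
```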
+1 for batching. In practice it’s probably more important that things get run for every MLIR checkin, and not necessarily for every LLVM checkin.
Steve
Hi folks,
Happy to set it to batch mode, if someone tells me where to configure it.
Otherwise we could also upgrade the machine from 16 to 32 cores, if you would like to get more build results. Or do both…
Hi Galina,
How can I set a builder to “batch mode”? I could not find any documentation or examples for that…
Hi Christian,
That’s the default mode.
Both builders explicitly defined ‘mergeRequests’: False to build individual commits.
I have changed that by https://github.com/llvm/llvm-zorg/commit/8d3a31cb12b51456e276a19baf6694cc44ff8c59.
Now they will group commits when there is a waiting queue. This means larger blame lists, unfortunately. So if you have resources to throw at these builders to make them faster, that would be great.
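For anyone looking for where this lives: buildbot masters are configured in Python, and request merging is controlled per builder. A hypothetical sketch of the relevant fragment of a master.cfg (the builder name is real; the worker name, builddir, and factory are illustrative placeholders, and the actual definitions live in llvm-zorg):

```python
# Sketch of a classic buildbot builder definition. With
# 'mergeRequests': False, every pending commit gets its own build.
c['builders'].append({
    'name': 'mlir-nvidia',
    'slavenames': ['mlir-nvidia-worker'],  # assumed worker name
    'builddir': 'mlir-nvidia',             # assumed build directory
    'factory': f,                          # build factory defined elsewhere
    'mergeRequests': False,                # build each commit individually
})

# Removing the key (or setting it to True) restores the default:
# queued requests with compatible sources are merged into one build,
# at the cost of a larger blame list when that build fails.
```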
Thanks
Galina