So, here at Arm, we are trying to cover the latest Clang and GCC.
Until now we have Clang 8 (soon to become Clang 10), GCC 10 (shared-library build) and GCC 9.3.
NERSC added another buildbot for an x86 machine running Linux (it is not in production).
Our servers are also from Packet.net, but only arm64.
So the configurations that we would like to cover are:
1) Different servers/architectures (at the moment we only have x86 and Arm64)
2) Different types of build (by priority order):
An out-of-tree build with the latest stable GCC, if possible
A multi-configuration build, as it is a new feature of the latest CMake
A bootstrap build (trunk Clang)
3) Different OSes:
RHEL and LTS releases
It is good that you are building on Ubuntu 18.04, but as you use Docker, is it possible to build with other OSes?
4) Different compiler versions:
It would be nice to test with lots of different compiler versions; at the moment we only test with the latest Clang and GCC.
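For reference, the out-of-tree build mentioned in 2) might look roughly like the sketch below. This is only an illustration of the general two-step pattern (install LLVM/MLIR, then build flang standalone against it); the paths, the GCC version, and the source layout are placeholders, not any bot's actual configuration:

```shell
# Step 1: build and install LLVM with MLIR, using a pinned GCC.
# (gcc-10/g++-10 and the directory names are assumptions.)
cmake -G Ninja -S llvm-project/llvm -B build-llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_COMPILER=gcc-10 \
  -DCMAKE_CXX_COMPILER=g++-10 \
  -DLLVM_ENABLE_PROJECTS=mlir \
  -DCMAKE_INSTALL_PREFIX="$PWD/install-llvm"
ninja -C build-llvm install

# Step 2: configure flang out of tree, pointing it at the install
# above, then run its tests.
cmake -G Ninja -S llvm-project/flang -B build-flang \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_DIR="$PWD/install-llvm/lib/cmake/llvm" \
  -DMLIR_DIR="$PWD/install-llvm/lib/cmake/mlir"
ninja -C build-flang check-flang
```

The point of the split is that the standalone flang configure step exercises the installed CMake packages rather than the in-tree build, which is exactly what an out-of-tree bot would catch regressions in.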
I hope this gives you some guidance on what you can do with buildbot.
If you need more specific configuration let me know.
Thanks for all the replies!
I guess I'll start with the buildbots since there seems to be a lot of
interest. I'll look into adding one for the "out of tree" build.
@Carol: What other configurations would be interesting? We already
have buildbots running on lab.llvm.org. Our hardware is a 64-core
machine from Packet.net, where we just deploy different docker
containers for different buildbots. As far as I can see, the image we
use for them is very similar to what you have on
flang-aarch64-ubuntu, except we use clang instead of gcc. This would
be different from flang-aarch64-ubuntu-clang only in that it uses
Ubuntu 18.04 instead of 16.04, and clang 8.0.1 instead of 8.0.0. So I
guess a one-stage build+check-all wouldn't be very revealing. Would it
be interesting to add flang to one of our existing two-stage bots
though, to track how it compiles with ToT clang?
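For context, the "different docker containers for different buildbots" setup described above follows the usual buildbot-worker pattern: one long-running container per bot, each connecting back to the master. A rough sketch, where the image name, worker names, and credential handling are all made up for illustration and not the actual Linaro deployment:

```shell
# One container per bot configuration; each runs a buildbot worker
# that connects to the lab.llvm.org master. Image/worker names and
# the password mechanism are placeholders.
docker run -d --restart=always --name worker-ubuntu-clang \
  -e WORKER_NAME=linaro-flang-aarch64-ubuntu-clang \
  -e WORKER_PASSWORD="$CLANG_BOT_PASSWORD" \
  my-registry/llvm-worker:ubuntu-18.04-clang-8.0.1

docker run -d --restart=always --name worker-ubuntu-gcc \
  -e WORKER_NAME=linaro-flang-aarch64-ubuntu \
  -e WORKER_PASSWORD="$GCC_BOT_PASSWORD" \
  my-registry/llvm-worker:ubuntu-18.04-gcc
```

Since the toolchain lives entirely in the image tag, adding a bot for another compiler or OS version is just a new image plus one more `docker run`, which is what makes covering more configurations cheap on a single 64-core machine.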
@Richard: I currently have a job for 11.0.0-rc1 in progress with flang
enabled, if I manage to reproduce the documentation failures I'll have
a look. Otherwise, from a release perspective I think increasing the
amount of testing would be the most important next step. I already saw
a thread about adding the FCVS to the test-suite - I can run that on
my end for 11.0.0 and report the results.
 http://llvm.validation.linaro.org/ (This is just our monitoring
page, all the links lead back to lab.llvm.org)
PS: I'll be out of office on Monday.