But I also see another option, which someone else mentioned up-thread:
simply make only the regression tests supported. Without a regression
test case that exhibits a bug, there would be no reverts or other
complaints. It would be entirely up to the maintainer to find and
reduce such test cases from any failure of out-of-tree execution
tests.
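For concreteness, such a regression test might look like the sketch
below: a lit/FileCheck test run against the BPF back-end. The function
and the CHECK lines are illustrative, not taken from the tree.

    ; RUN: llc -march=bpf < %s | FileCheck %s
    ; Reduced test case: a constant return should lower to a
    ; plain move into r0 followed by exit.
    define i64 @return_one() {
    ; CHECK-LABEL: return_one:
    ; CHECK: r0 = 1
    ; CHECK: exit
      ret i64 1
    }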
IMHO, it's about transparency and commitment.

For experimental back-ends (like BPF), what you describe is perfectly
fine. If we wanted BPF to be official, I'd personally only accept it
if there were at least one buildbot with a minimal domain-specific set
of tests. In the BPF case, I'd expect Linux to boot and run some
arbitrary code, checking for an expected result. For a back-end that
targets real hardware, like yours, I'd expect some generic code to be
compiled and run successfully, with a strong bias towards getting the
test-suite running on it.
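As a sketch of what such generic code could be: a test-suite style
program is just a self-contained C file whose output is diffed against
a reference. The program below is a made-up example, not an actual
test-suite entry.

    /* Hypothetical execution test: compile with the back-end under
     * test, run on the target (or a simulator), and diff the output
     * against the expected line below. */
    #include <stdio.h>

    /* Exercises calls, branches and integer arithmetic. */
    static int fib(int n) {
      return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    int main(void) {
      printf("fib(10) = %d\n", fib(10));
      return 0;
    }

    /* Expected output: fib(10) = 55 */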
The BPF backend has been a first-class citizen for a year or so.
All major Linux distros ship LLVM with it enabled
(unlike some other backends).
The BPF buildbot also processes more commits than the arm64 ones:
buildername                | completed | failed | time
clang-aarch64-lnt          |        44 |     21 | 02:25:25
clang-atom-d525-fedora     |       136 |    130 | 09:02:23
clang-atom-d525-fedora-rel |       146 |    117 | 01:57:06
clang-bpf-build            |       311 |     32 | 00:03:29
clang-cmake-aarch64-42vma  |       150 |     31 | 00:47:52
clang-cmake-aarch64-full   |        50 |     15 | 03:38:37
clang-cmake-aarch64-quick  |       162 |     36 | 00:43:11