The Clang Static Analyzer (CSA) and Clang-Tidy both have many unit tests. That's great, but in my view it is also important to run CSA and Clang-Tidy regularly on a set of real-life projects and compare the results between runs to catch regressions.
I know that for CSA there is a set of useful scripts in 'clang/utils/analyzer', but I didn't find anything similar for clang-tidy.
My questions are:
Are there any buildbots that regularly test CSA and/or Clang-Tidy on some set of projects? If not, would it be possible to set one up?
I didn't find anything about regression testing in the official documentation. I think a note about regression testing should be added: how it works now, how to do it for your code, etc. Currently there is only a small note about running unit tests - https://clang-analyzer.llvm.org/checker_dev_manual.html#testing
I did work towards a clang-tidy buildbot some time ago but unfortunately had to stop because of time constraints.
My setup was (and still is) along these lines:
- build latest clang-tidy as a docker image
- build a project image, compile the project the way it expects, and generate the "compile_commands.json"
- run each check-category over the whole project
- deduplicate diagnostics, as the output is massive for some project/check combinations (~GBs for one run)
- apply fixes
- check whether the project still compiles after fixing
- run git diff to see what has been fixed and to find potential breaking changes
- optionally silence some checks, like style-related ones that are just too noisy
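The deduplication step above matters the most in practice, since headers included from many translation units repeat the same warning thousands of times. A minimal sketch of how it could work, assuming the default clang-tidy diagnostic line format ("file:line:col: warning: message [check-name]") - the function name and keying scheme are my own, not part of clang-tidy:

```python
import re
from collections import OrderedDict

# Matches a default-format clang-tidy diagnostic line, e.g.
# "src/foo.cpp:12:5: warning: use nullptr [modernize-use-nullptr]"
DIAG_RE = re.compile(
    r"^(?P<file>[^:\n]+):(?P<line>\d+):(?P<col>\d+): "
    r"(?P<level>warning|error): (?P<msg>.*?) \[(?P<check>[^\]]+)\]$"
)

def dedup_diagnostics(raw_output):
    """Keep only the first occurrence of each unique diagnostic.

    Keying on (file, line, col, check) collapses the repeats that
    appear when the same header is analyzed in many translation units.
    Caret lines, notes, and source excerpts do not match the regex
    and are dropped.
    """
    seen = OrderedDict()
    for line in raw_output.splitlines():
        m = DIAG_RE.match(line)
        if not m:
            continue
        key = (m["file"], m["line"], m["col"], m["check"])
        if key not in seen:
            seen[key] = line
    return list(seen.values())
```

This keeps the report proportional to the number of distinct findings instead of the number of translation units, which is what shrinks those multi-GB logs to something reviewable.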
I ran this over ~10 projects (blender, opencv, llvm, curl, ...) privately while developing. In principle this buildbot works, but I would refine some parts,
especially moving towards the mono-repo. Improving how to access the diagnostics and the diff would help as well, I guess. It is still somewhat of a prototype.
If there is more interest I can publish my current work to GitHub or similar and we can set up a buildbot. I would definitely contribute a worker machine,
and the docker-based approach should help with setting up more workers for more projects.
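For reference, building clang-tidy into a base image (the first step above) could look roughly like this - a hedged sketch, not my actual Dockerfile; the base image choice and package list are assumptions:

```dockerfile
# Sketch: build clang-tidy from the llvm-project monorepo.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y git cmake ninja-build g++ python3
RUN git clone --depth 1 https://github.com/llvm/llvm-project.git /llvm-project
# clang-tidy lives in clang-tools-extra, which needs clang enabled too.
RUN cmake -G Ninja -S /llvm-project/llvm -B /build \
      -DCMAKE_BUILD_TYPE=Release \
      -DLLVM_ENABLE_PROJECTS="clang;clang-tools-extra" \
 && ninja -C /build clang-tidy
```

Project images would then layer the project's own build dependencies and compile_commands.json on top of this.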
The only obstacle I see personally is the massive output some projects generate and how to make something useful out of it.
IMHO it should not run on the main buildbot we have, because, for example, "does it still compile" is rarely true when fixing is activated (unfortunately), so the
easy ways to measure success are not feasible to use.
Best Regards, Jonas
Yeah, we should totally document our buildbot utils somewhere.
I believe the main reason why there aren't any public buildbots is that most parties target their own code, i.e. they're more interested in fixing the false positives that hurt them specifically, and these overlap very poorly, given how dramatically different codebases usually are.
There's a historical attempt at this - technically it's there, but in practice we never got around to growing it into anything more than an experimental prototype. Also, I heard that Green Dragon's CPU power is relatively expensive, so we should go for a separate machine if we are to set up a public Static Analyzer reference-results bot.