Hello!
I can't see a cfe-users mailing list, so I assume cfe-dev is a good
place for users to ask questions about clang usage. I hope I'm right.
I'm thinking about creating an automatic tool that runs clang analysis
on every commit, or at some regular interval. I'd like to compare the
reports to know whether new commits bring something unwanted into the
code. Ideally I would see which errors were fixed and which new
problems appeared with each commit, but it would be enough to see new
warnings. I have no idea how to parse the HTML to create a useful diff.
Is there any known tool that does something like this?
Regards,
Marcin
Hi Marcin,
The output of which tool do you want to diff? Are you asking about the clang static analyzer (http://clang-analyzer.llvm.org/) or something else?
Anna.
Hi Marcin,
Hi Anna!
The output of which tool do you want to diff? Are you asking about the
clang static analyzer (http://clang-analyzer.llvm.org/) or something else?
Yes, I meant the clang static analyzer (scan-build); I forgot to mention that, sorry.
Marcin
Hello!
I'm thinking about creating an automatic tool that runs clang analysis
on every commit, or at some regular interval. I'd like to compare the
reports to know whether new commits bring something unwanted into the
code. Ideally I would see which errors were fixed and which new
problems appeared with each commit, but it would be enough to see new
warnings. I have no idea how to parse the HTML to create a useful diff.
The clang analyzer can output plist files in addition to HTML. We have a Python script which compares two plist files (see clang/utils/analyzer/CmpRuns.py). I've also started working on infrastructure to diff the analyzer warnings between different scan-build executions on the same project. We use this on an internal analyzer buildbot, which tests the analyzer using a fixed revision of each analyzed project.
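For reference, an end-to-end run looks roughly like this (I'm writing the scan-build flags and script arguments from memory, so they may differ between versions):

    # Analyze two revisions of the project, emitting plist reports
    # instead of HTML:
    scan-build -plist -o results-old make
    scan-build -plist -o results-new make

    # Compare the two sets of reports:
    python clang/utils/analyzer/CmpRuns.py results-old results-new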
The main issue is that we compare issues only by file and line number, so a change that is unrelated to a warning, but touches a file that contains one, can perturb the diff. This greatly limits new-issue detection and makes it noisy to the point where it might not be very useful. So, currently, the best workflow for a changing codebase is to keep the project free of analyzer issues and fix new ones as soon as they are reported.
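To make the limitation concrete, the comparison is keyed on something morally equivalent to the sketch below (illustrative only, not the actual CmpRuns code):

    # Illustrative sketch, not the real CmpRuns implementation.
    # Keying on (file, line, description) means any edit that shifts
    # line numbers makes an unchanged issue look both "fixed" and
    # "new" at the same time.
    def issue_key(issue):
        return (issue['file'], issue['line'], issue['description'])

    def diff_runs(old_issues, new_issues):
        old_keys = set(issue_key(i) for i in old_issues)
        new_keys = set(issue_key(i) for i in new_issues)
        added = new_keys - old_keys    # reported as new (may be noise)
        removed = old_keys - new_keys  # reported as fixed (may be noise)
        return added, removed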
We are very much interested in supporting the workflow you are talking about. The main challenge is coming up with a smarter way of comparing issues, and of course implementing it. We expect this to require support on the analyzer side.
Cheers,
Anna.
On 22.05.2012 19:25, Anna Zaks wrote:
Hello!
I'm thinking about creating an automatic tool that runs clang analysis
on every commit, or at some regular interval. I'd like to compare the
reports to know whether new commits bring something unwanted into the
code. Ideally I would see which errors were fixed and which new
problems appeared with each commit, but it would be enough to see new
warnings. I have no idea how to parse the HTML to create a useful diff.
Hi Anna!
The clang analyzer can output plist files in addition to HTML. We have a Python script which compares two plist files (see clang/utils/analyzer/CmpRuns.py). I've also started working on infrastructure to diff the analyzer warnings between different scan-build executions on the same project. We use this on an internal analyzer buildbot, which tests the analyzer using a fixed revision of each analyzed project.
The main issue is that we compare issues only by file and line number, so a change that is unrelated to a warning, but touches a file that contains one, can perturb the diff. This greatly limits new-issue detection and makes it noisy to the point where it might not be very useful. So, currently, the best workflow for a changing codebase is to keep the project free of analyzer issues and fix new ones as soon as they are reported.
We are very much interested in supporting the workflow you are talking about. The main challenge is coming up with a smarter way of comparing issues, and of course implementing it. We expect this to require support on the analyzer side.
Fixing all issues is great, but it is often too hard. :) Thank you for
all the information. CmpRuns.py is quite useful; it's easier to check
20 issues than 200.
Regards,
Marcin
On 22.05.2012 19:25, Anna Zaks wrote:
Hello!
I'm thinking about creating an automatic tool that runs clang analysis
on every commit, or at some regular interval. I'd like to compare the
reports to know whether new commits bring something unwanted into the
code. Ideally I would see which errors were fixed and which new
problems appeared with each commit, but it would be enough to see new
warnings. I have no idea how to parse the HTML to create a useful diff.
Hi Anna!
The clang analyzer can output plist files in addition to HTML. We have a Python script which compares two plist files (see clang/utils/analyzer/CmpRuns.py). I've also started working on infrastructure to diff the analyzer warnings between different scan-build executions on the same project. We use this on an internal analyzer buildbot, which tests the analyzer using a fixed revision of each analyzed project.
The main issue is that we compare issues only by file and line number, so a change that is unrelated to a warning, but touches a file that contains one, can perturb the diff. This greatly limits new-issue detection and makes it noisy to the point where it might not be very useful. So, currently, the best workflow for a changing codebase is to keep the project free of analyzer issues and fix new ones as soon as they are reported.
We are very much interested in supporting the workflow you are talking about. The main challenge is coming up with a smarter way of comparing issues, and of course implementing it. We expect this to require support on the analyzer side.
Fixing all issues is great, but it is often too hard. :) Thank you for
all the information. CmpRuns.py is quite useful; it's easier to check
20 issues than 200.
You can also take a look at the SATest*.py scripts, which use CmpRuns to compare the results of two scan-build executions.
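Their interfaces are still in flux, so if you want to explore them, querying the options directly is the safest starting point (assuming a standard checkout layout):

    python clang/utils/analyzer/SATestBuild.py --help
    python clang/utils/analyzer/SATestAdd.py --help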
I also want to point out that, due to the limitations mentioned above, we might not support the existing scripts and output formats forever.
Cheers,
Anna.
Perhaps you might have a look at how FindBugs heuristically tracks issues as code changes. It's not perfect, but it works well enough most of the time. I believe there's some discussion of the issue in their published papers. If not, well, it's all open source.
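For what it's worth, the core idea is to match issues on location-independent features rather than raw line numbers. A toy version of that kind of heuristic (my own sketch, not FindBugs' actual algorithm) could look like this:

    # Toy sketch of location-independent issue matching; not FindBugs'
    # actual algorithm. Issues with the same checker, enclosing
    # function, and description are paired by their ordinal within
    # that group, so edits that merely shift line numbers don't
    # break the match.
    from collections import defaultdict

    def stable_keys(issues):
        counts = defaultdict(int)
        keys = {}
        for issue in issues:
            base = (issue['checker'], issue['function'], issue['description'])
            keys[base + (counts[base],)] = issue
            counts[base] += 1
        return keys

    def diff_runs(old_issues, new_issues):
        old_keys = stable_keys(old_issues)
        new_keys = stable_keys(new_issues)
        added = [new_keys[k] for k in new_keys if k not in old_keys]
        fixed = [old_keys[k] for k in old_keys if k not in new_keys]
        return added, fixed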
Dean F. Sutherland