Coverity vs Clang Static Analyzer

Hello,

Synopsys has decided to acquire Coverity. We are a direct competitor of Synopsys and have been using Coverity. Since we compete with Synopsys directly, I believe we will have to decrease our dependence on Coverity from now on.

Can someone please comment on the features of the Clang Static Analyzer vs. Coverity? Does Coverity catch any extra errors, or can we just do a drop-in replacement?

Thanks
GRR

Hi,

Actually, one of the main features I like in Coverity is cross-module analysis.

Getting a warning when you pass a potentially null parameter to a method that tries to dereference it unconditionally helps me catch a lot of issues.
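
(For illustration, a minimal sketch of that pattern; the functions here are hypothetical, and path_length() is assumed to live in another module:)

  #include <cstring>

  struct Config { const char *path; };

  // Imagine this function is defined in a different module:
  // it dereferences 'cfg' unconditionally, with no null check.
  std::size_t path_length(const Config *cfg) {
      return std::strlen(cfg->path);
  }

  // Hypothetical lookup that returns nullptr when nothing is loaded.
  const Config *lookup_config() { return nullptr; }

  std::size_t current_path_length() {
      const Config *cfg = lookup_config();  // possibly null
      return path_length(cfg);              // cross-module analysis warns about the
                                            // unconditional dereference in the callee
  }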

When using Clang, I also miss some simple warnings, like uninitialized C++ members in constructors.
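
(For example, a minimal sketch of the kind of constructor issue meant here; the class is hypothetical:)

  // 'count' is never initialized by the constructor, so the first
  // increment reads an indeterminate value.
  class Counter {
  public:
      Counter() {}                      // forgets to initialize 'count'
      int next() { return ++count; }    // undefined behaviour on first call

  private:
      int count;                        // uninitialized member
  };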

Which language are you interested in? (C, Obj-C or C++)?

Hi,

We use Coverity, Clang, CPPCheck, and PC-Lint. Each of these programs reports
different errors; the intersection is almost empty. Coverity can find the most
"real" runtime problems; the false-positive rate depends on the aggressiveness level.

For bug tracking, we redirect all reports/outputs to SonarQube.

Mira

Hi All,
Thank you for your responses. I get the feeling that Clang can do a lot of what Coverity does, so switching to it may not be a problem.

Manuel,

We are using it for C++.

I'd say C++ is still the weak part of the analyzer (your mileage might vary
depending on how "C++" your code base actually is). We currently get > 50%
false positives (on the Chromium code base). If you're interested in
helping with a solution, I can point you at the bugs to start (we've found
mainly one hairy bug that's left over - correct tracking of destructors of
temporaries).
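
(To illustrate, a minimal sketch of why the destructor of a temporary matters; the types here are hypothetical:)

  // The unnamed temporary's destructor runs at the end of the full expression
  // and writes to 'value', so the later read is fine; an analyzer that does
  // not model that destructor call may report a spurious uninitialized read.
  struct Init {
      int *out;
      explicit Init(int *p) : out(p) {}
      ~Init() { *out = 42; }            // the side effect lives in the destructor
  };

  int read_value() {
      int value;                        // not initialized here
      Init{&value};                     // temporary constructed and destroyed now
      return value;                     // value == 42; flagged only if the
                                        // temporary's destructor is not tracked
  }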

Cheers,
/Manuel

Manuel,

Thanks for the information. I would love to be able to help… Please let me know which bugs I should start looking at.

50% false positives implies there is a lot of scope for improvement. Our code base is mostly C++, with liberal use of templates and C++11 features.

Thanks

GRR

+Alex McCarthy, who has recently started to invest some cycles
+Daniel Connelly, who has done the stats with the current static analyzer for chromium
+Ted & Jordan, to correct me when I say something wrong :wink:

Hello,

Manuel will probably have more concrete bugs, but this search:
http://llvm.org/bugs/buglist.cgi?quicksearch=scan-build&list_id=52006
already shows plenty of them.
Just look for "false positive" (please note that there might be some duplicates).

FYI, I run scan-build once a day on the LLVM toolchain (llvm + clang
+ lldb + compiler-rt + polly) code base:
http://buildd-clang.debian.net/scan-build/
if you want to see the advantages / drawbacks on a real code base.

Sylvestre

Thanks for roping us in. C++ codebases are indeed still fairly unpredictable for the analyzer: on some it will do just fine, on some it will have a few false positives, and on some (like Chromium) it pretty much fails to understand the intent of the code. Manuel has pointed to the largest issue on the table, but there are probably easier places to get started. Sylvestre's right that the Bugzilla database isn't a bad place to start; for a discussion of analyzer internals, I recommend the talk linked from our Checker Developer Manual page.

On a higher-level note, a big difference between Coverity’s tools and our analyzer is that ours (currently) only looks at one translation unit at a time. This means that Coverity’s tool will find some interprocedural bugs that ours cannot. This is something we’d like to improve in the long run, but it would be a big new effort, not an obvious extension to what’s already there, and we’d want to make sure it was still performant.
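
(As a minimal sketch of that difference, hypothetical code split across two files:)

  // util.cpp -- the callee dereferences its argument unconditionally
  int scale(const int *factor) {
      return 2 * *factor;
  }

  // main.cpp -- the caller passes a null pointer
  extern int scale(const int *factor);
  int run() {
      return scale(nullptr);  // analyzing main.cpp alone never sees the dereference
                              // inside scale(); a cross-TU tool can connect the two
  }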

People have also noted Coverity’s integrated issue tracking; our support for this is rudimentary at best. The most common way to silence false positives with our analyzer is to use assertions.
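
(As a minimal sketch of that idiom, with a hypothetical function:)

  #include <cassert>

  struct Item { int id; };

  int item_id(Item *item) {
      assert(item && "item must not be null here");
      return item->id;  // the analyzer assumes the assert holds, so it no longer
                        // reports a possible null dereference on this line
  }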

(I appreciated seeing everyone’s feedback, and am glad to see our tool is still useful even when up against commercial offerings.)

Jordan