LCOV per commit

Folks,

I was looking at LCOV (http://llvm.org/reports/coverage/) and it's
nice and all, but it doesn't say which commit it covers, nor show the
difference between two commits. We could then have that report for
every buildbot (check-all, test-suite, etc.) for the patches specific
to the build, per architecture. How easy would it be to do that for
any given buildbot?
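
Something like this rough sketch (untested) could compute the per-file
difference between two coverage runs; it only assumes lcov's
plain-text .info tracefile format (SF:/DA:/end_of_record records), and
the file names are placeholders:

#!/usr/bin/env python
# Rough sketch (untested): diff two lcov tracefiles, e.g. one captured
# before a commit and one after, and print per-file coverage deltas.
# Assumes lcov's plain-text .info format; file names are placeholders.
import sys

def parse_info(path):
    """Parse an lcov .info tracefile into {source_file: {line: hits}}."""
    files, current, lines = {}, None, {}
    for raw in open(path):
        raw = raw.strip()
        if raw.startswith('SF:'):
            current, lines = raw[3:], {}
        elif raw.startswith('DA:') and current:
            lineno, count = raw[3:].split(',')[:2]
            lines[int(lineno)] = lines.get(int(lineno), 0) + int(count)
        elif raw == 'end_of_record' and current:
            # The same file can appear in several records; sum counters.
            merged = files.setdefault(current, {})
            for ln, c in lines.items():
                merged[ln] = merged.get(ln, 0) + c
            current = None
    return files

def pct(lines):
    """Percentage of instrumented lines with a non-zero hit count."""
    if not lines:
        return 0.0
    return 100.0 * sum(1 for c in lines.values() if c) / len(lines)

if __name__ == '__main__':
    old, new = parse_info(sys.argv[1]), parse_info(sys.argv[2])
    for src in sorted(set(old) | set(new)):
        delta = pct(new.get(src, {})) - pct(old.get(src, {}))
        if abs(delta) >= 0.005:
            print('%+7.2f%%  %s' % (delta, src))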

Another potential project would be to pick a specific architecture
and, patch by patch, check how many of the *changed* lines are touched
by the current tests, including the ones added, for say check-all.
Since we hope to have good coverage on check-all, this should be a
good indication of how well tested each patch is, and could give us an
*additional* measure of quality.
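
The second idea is mostly an intersection between the lines a patch
adds and the lines the tracefile marks as executed. A rough sketch,
assuming a git checkout and a single .info tracefile, with the
matching of git paths to tracefile paths simplified to a suffix check:

#!/usr/bin/env python
# Rough sketch (untested): report how many of a commit's added lines
# are executed according to an lcov .info tracefile.
import re
import subprocess
import sys

def added_lines(commit):
    """Map each touched file to the set of line numbers `commit` added."""
    diff = subprocess.check_output(
        ['git', 'show', '--unified=0', '--pretty=format:', commit])
    added, current = {}, None
    for line in diff.decode('utf-8', 'replace').splitlines():
        m = re.match(r'\+\+\+ b/(.*)', line)
        if m:
            current = m.group(1)
            continue
        m = re.match(r'@@ -\S+ \+(\d+)(?:,(\d+))? @@', line)
        if m and current:
            start, count = int(m.group(1)), int(m.group(2) or '1')
            added.setdefault(current, set()).update(
                range(start, start + count))
    return added

def covered_lines(info_path):
    """Map each source file in the tracefile to its executed lines."""
    covered, current = {}, None
    for raw in open(info_path):
        raw = raw.strip()
        if raw.startswith('SF:'):
            current = raw[3:]
        elif raw.startswith('DA:') and current:
            lineno, count = raw[3:].split(',')[:2]
            if int(count) > 0:
                covered.setdefault(current, set()).add(int(lineno))
    return covered

if __name__ == '__main__':
    cov = covered_lines(sys.argv[2])
    for path, lines in sorted(added_lines(sys.argv[1]).items()):
        # Tracefiles usually store absolute paths; match on the suffix.
        hits = next((v for k, v in cov.items() if k.endswith(path)), set())
        print('%s: %d/%d added lines executed'
              % (path, len(lines & hits), len(lines)))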

Would anyone be interested in taking those projects? Shall I add them
to the list of ideas in http://llvm.org/OpenProjects.html?

cheers,
--renato

> I was looking at LCOV (http://llvm.org/reports/coverage/) and it's
> nice and all, but it doesn't say which commit it covers, nor show the
> difference between two commits. We could then have that report for
> every buildbot (check-all, test-suite, etc.) for the patches specific
> to the build, per architecture. How easy would it be to do that for
> any given buildbot?
>
> Another potential project would be to pick a specific architecture
> and, patch by patch, check how many of the *changed* lines are touched
> by the current tests, including the ones added, for say check-all.
> Since we hope to have good coverage on check-all, this should be a
> good indication of how well tested each patch is, and could give us an
> *additional* measure of quality.

I'd love to have this. It's tiresome to look through patches/tests
manually to see whether the error cases have been exercised, etc.

(Of course, all coverage-based test quality assessment falls into the
trap of "exercised but not verified", which is why I'd also love the
mutation testing support that's been discussed on-list recently,
possibly with the domain restricted to the changed/added lines in the
patch for a fast pass, then a longer-running pass that might catch
knock-on effects elsewhere in the code.)
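
Purely to illustrate the domain-restriction idea, a toy text-level
mutation loop over just the changed lines could look like the sketch
below; the file, line numbers, and build/test commands are
placeholders, and a real tool would mutate the AST rather than text:

#!/usr/bin/env python
# Toy sketch of restricting mutation testing to a patch: mutate only
# the changed lines, rebuild, run the tests, count survivors.
import shutil
import subprocess

# Naive text-level operators; a real tool would mutate the AST.
MUTATIONS = [('<=', '<'), ('>=', '>'), ('==', '!='), ('+', '-')]

def run(cmd):
    """Run a shell command quietly, returning its exit status."""
    return subprocess.call(cmd, shell=True,
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL)

def mutate_file(path, changed_lines, build_cmd, test_cmd):
    """Yield (line, (old, new), survived) for each mutant that builds."""
    original = open(path).readlines()
    for lineno in sorted(l for l in changed_lines if l <= len(original)):
        for old, new in MUTATIONS:
            if old not in original[lineno - 1]:
                continue
            shutil.copy(path, path + '.orig')
            try:
                mutated = list(original)
                mutated[lineno - 1] = mutated[lineno - 1].replace(old, new, 1)
                open(path, 'w').writelines(mutated)
                if run(build_cmd) != 0:
                    continue              # mutant doesn't compile; skip
                yield lineno, (old, new), run(test_cmd) == 0
            finally:
                # Always restore the pristine source file.
                shutil.move(path + '.orig', path)

if __name__ == '__main__':
    # changed_lines would come from parsing `git show`, as in the
    # patch-coverage sketch earlier in the thread.
    for line, (old, new), survived in mutate_file(
            'lib/Foo.cpp', {10, 42}, 'ninja', 'ninja check-all'):
        print('line %d: %r -> %r %s' %
              (line, old, new, 'SURVIVED' if survived else 'killed'))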

This might also wrap back around to the idea of running all the
target-independent regression tests against all compiled targets.
(Currently we run them against the host target, but there's no reason
we can't run them, on any host, against every target we've built
support for.)

> Would anyone be interested in taking those projects? Shall I add them
> to the list of ideas in http://llvm.org/OpenProjects.html?

Seems reasonable.

I could not easily locate this on http://llvm.org/reports/coverage/ so
asking here: what workload is the coverage computed over? IOW, what
all does the bot run to get this coverage information?

-- Sanjoy

Nothing is clear on that page. No revision number, no arch, no test
names, nothing. Just a date.

If that could be introduced as a step in the buildbots, we could have
them run on all architectures and report the coverage on that page,
like LNT does.

The CMakeBuilder is used by almost all ARM/AArch64 and Windows
builders, we could potentially make it run for all archs, and adding
another step might be simple, depending on how that LCOV page works...
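
If it helps, the extra step could be as small as the sketch below.
This is not actual zorg code; the step names, paths, and upload
destination are invented, and it assumes buildbot's old-style
ShellCommand API:

# Sketch (not actual zorg code) of an extra coverage step in a
# buildbot factory; names, paths, and destination are placeholders.
from buildbot.steps.shell import ShellCommand
from buildbot.process.properties import WithProperties

def addCoverageSteps(f):
    # Collect the counters written while check-all ran against the
    # instrumented binaries.
    f.addStep(ShellCommand(
        name='lcov capture',
        command=['lcov', '--capture', '--directory', '.',
                 '--output-file', 'coverage.info'],
        description='capturing coverage',
        workdir='build'))
    # Name the tracefile after the revision so two runs can be diffed.
    f.addStep(ShellCommand(
        name='lcov upload',
        command=['scp', 'coverage.info',
                 WithProperties(
                     'reports@example.org:coverage/%(got_revision)s.info')],
        description='uploading coverage',
        workdir='build'))
    return f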

cheers,
--renato

> I could not easily locate this on http://llvm.org/reports/coverage/ so
> asking here: what workload is the coverage computed over? IOW, what
> all does the bot run to get this coverage information?

> Nothing is clear on that page. No revision number, no arch, no test
> names, nothing. Just a date.

I am running it.
I don't have access to test names, but I can try to add the revision
number. The arch is GNU/Linux 64-bit.
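
One minimal way to stamp the report with the revision, assuming a git
checkout and that genhtml produces the page (its --title string ends
up in the HTML header); the title string and file names here are
placeholders:

# Sketch: embed the checkout's revision in the genhtml report title.
import subprocess

rev = subprocess.check_output(
    ['git', 'rev-parse', '--short', 'HEAD']).decode().strip()
subprocess.check_call(
    ['genhtml', 'coverage.info', '--output-directory', 'report',
     '--title', 'llvm-toolchain @ %s (x86_64 GNU/Linux)' % rev])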

Joshua (CC'd) worked on bringing better coverage to Thunderbird and
Firefox:
https://github.com/jcranmer/mozilla-coverage
Maybe I could hack on his tool to get the test information into
LLVM/Clang too.
For now, I haven't put too much effort into it, as I wasn't sure how
many people are using the code coverage (by the way, is it possible to
have stats on our website?).

FYI, it is generated by this Jenkins job:
http://llvm-jenkins.debian.net/job/llvm-toolchain-codecoverage-binaries/

I have plans to try https://coveralls.io/ to handle that. It should
not be too hard, as I already have most of the things enabled in this
job.

> The CMakeBuilder is used by almost all ARM/AArch64 and Windows
> builders, we could potentially make it run for all archs, and adding
> another step might be simple, depending on how that LCOV page works...

For now, it uses autotools and is managed by the same workflow as the
packages on llvm.org/apt/.
Moving to CMake is just a matter of time.

Anyway, don't hesitate if you have other comments or suggestions.
Glad to know it interests folks, despite all its limitations.

Cheers,
Sylvestre

Added: http://llvm.org/OpenProjects.html#coverage

cheers,
--renato

An example of the tool I wrote is publicly accessible at
<https://www.tjhsst.edu/~jcranmer/m-ccov/>, although it is a bit on
the older side. I essentially ported lcov to Python, firstly because
the lcov tool seemed to suddenly run achingly slowly, and speed is
important when you're merging 100 test coverage files comprising
~1.8M counters. I also added a coverage treemap overview
(<https://www.tjhsst.edu/~jcranmer/m-ccov/coverage.html?dir=js> is a
link that won't kill your browser in animation), and you can view
detailed code coverage of individual tests on individual files, a
feature LCOV didn't have when I poked at it.
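
The merge step itself is conceptually simple; a rough sketch of its
core over plain .info tracefiles (the actual tool does considerably
more bookkeeping, e.g. function/branch records and per-test
attribution):

#!/usr/bin/env python
# Sketch of the merge step only: sum the DA counters of many lcov
# .info tracefiles into one.
import sys
from collections import defaultdict

def merge(paths):
    """counters[source_file][line] = total hits across all tracefiles."""
    counters = defaultdict(lambda: defaultdict(int))
    for path in paths:
        current = None
        for raw in open(path):
            raw = raw.strip()
            if raw.startswith('SF:'):
                current = raw[3:]
            elif raw.startswith('DA:') and current:
                lineno, count = raw[3:].split(',')[:2]
                counters[current][int(lineno)] += int(count)
    return counters

def write(counters, out):
    """Emit the merged data back in .info form."""
    for src in sorted(counters):
        out.write('SF:%s\n' % src)
        for lineno in sorted(counters[src]):
            out.write('DA:%d,%d\n' % (lineno, counters[src][lineno]))
        hit = sum(1 for c in counters[src].values() if c)
        out.write('LH:%d\nLF:%d\nend_of_record\n' % (hit, len(counters[src])))

if __name__ == '__main__':
    write(merge(sys.argv[1:]), sys.stdout)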

The only reason the tooling doesn't support clang is that clang's code
coverage support was fatally broken when I started the project, which
forced the tests to be run under gcc. The automated infrastructure for
building Firefox presently uses an old enough version of clang on OS X
that its code coverage is probably inherently broken, which is why I
can't get an instrumented Firefox build there; I've lacked any way to
definitively debug the issue, so it ended up being a lower priority.

Hi Joshua,

This sure looks beautiful, but I think we need something a lot simpler
than that.

First, I really want to know if the tests we run regularly (check-all
and test-suite) cover most of the code. So we don't need a
test-by-test report, just three states: check-all, test-suite, or
both.

Second, the area graph can hide small files that were never tested.
I'd like a report like LCOV (or your detailed report), but with the
option to sort by coverage, so you can focus on the problematic cases.
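
To be concrete, a view as simple as this sketch (untested, reading a
plain lcov .info tracefile) would already be enough for that:

#!/usr/bin/env python
# Sketch: list every file in an lcov .info tracefile, worst-covered
# first, so small untested files can't hide.
import sys

def per_file(info_path):
    """stats[source_file] = [lines hit, lines instrumented]."""
    stats, current = {}, None
    for raw in open(info_path):
        raw = raw.strip()
        if raw.startswith('SF:'):
            current = raw[3:]
            stats.setdefault(current, [0, 0])
        elif raw.startswith('DA:') and current:
            count = int(raw[3:].split(',')[1])
            stats[current][1] += 1
            if count:
                stats[current][0] += 1
    return stats

if __name__ == '__main__':
    stats = per_file(sys.argv[1])
    for src, (hit, found) in sorted(
            stats.items(),
            key=lambda kv: float(kv[1][0]) / max(kv[1][1], 1)):
        print('%7.2f%%  %s' % (100.0 * hit / max(found, 1), src))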

Third, your tool also doesn't seem to have a revision number, which
makes it difficult to compare commits or even know what features are
in or out.

Finally, a separate report, one that only includes the changed lines
for a specific commit, is the most important thing. This will tell you
whether the tests you committed touch all the logic you added, and
even whether the existing tests already exercise the patch, in which
case a new test would be redundant.

cheers,
--renato