Hi Anna,
Thank you for your kind reply.
1.)
Doing this automatically using a build-bot would of course be way more
convenient and effective than doing things manually. I was unaware
that you were already setting this up.
We just started working on this a week ago, and it has not been publicly discussed yet. (Perfect timing for your email! :))
I could of course still do
something similar manually for projects that aren't covered by the
build-bot, but maybe it would be a better idea to just add those to
the automatic process?
The immediate goal of the bot is to detect regressions we introduce during development. It will run daily, so adding too many projects to it might not be possible. In addition, it would be great if we could run the analyzer on a larger number of projects every month or so to catch the issues the bot misses. (You could still use the same scripts as our buildbot to automate the process.) It would also be useful to have someone file bug reports for the failures.
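To make the "same scripts" idea more concrete, here is a rough sketch of the kind of driver I have in mind for the monthly runs -- the project list, build commands, and checkout paths below are placeholders, and the real buildbot scripts do quite a bit more:

#!/usr/bin/env python
"""Rough sketch: run scan-build over a list of checked-out projects
and collect the HTML reports in one place. The project list, build
commands, and paths are placeholders."""
import os
import subprocess

CHECKOUTS = "/path/to/checkouts"                      # placeholder
OUTPUT_ROOT = os.path.expanduser("~/analyzer-reports")

# Hypothetical projects and the command that builds each one.
PROJECTS = {
    "openssl": "make",
    "postfix": "make",
    "glib": "make",
}

def analyze(name, build_cmd):
    """Run one project's build under scan-build; return its report dir."""
    out_dir = os.path.join(OUTPUT_ROOT, name)
    # scan-build intercepts the build and writes one HTML file per report.
    subprocess.check_call(["scan-build", "-o", out_dir] + build_cmd.split(),
                          cwd=os.path.join(CHECKOUTS, name))
    return out_dir

if __name__ == "__main__":
    for name, build_cmd in sorted(PROJECTS.items()):
        print("analyzed %s, reports in %s" % (name, analyze(name, build_cmd)))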

I ran the analyzer on a few projects a while back, but as a non-dev I
didn't immediately know how to turn that data into something useful
(until I came up with the idea to compare results from different
versions of the analyzer). The open source projects I tried were:
bind, dhcp, gcc, gdb, glib, ntp, openldap, openssl, postfix. Some of
those might be good candidates for the build-bot?
Definitely.
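For the version-to-version comparison you describe, even something as simple as counting the HTML reports scan-build writes per project would show where the numbers move between two analyzer builds. A rough sketch (the directory layout here is only an assumption about where the reports end up):

#!/usr/bin/env python
"""Compare two scan-build runs by counting reports per project.
Assumes each run directory holds one subdirectory per project, with
scan-build's timestamped output (one report-*.html per issue) inside;
that layout is an assumption, not something an existing script produces."""
import glob
import os
import sys

def count_reports(run_dir):
    """Return {project: number of report-*.html files}."""
    counts = {}
    for project in sorted(os.listdir(run_dir)):
        if not os.path.isdir(os.path.join(run_dir, project)):
            continue
        pattern = os.path.join(run_dir, project, "*", "report-*.html")
        counts[project] = len(glob.glob(pattern))
    return counts

if __name__ == "__main__":
    # e.g.: compare_runs.py reports-analyzer-old reports-analyzer-new
    old_run, new_run = count_reports(sys.argv[1]), count_reports(sys.argv[2])
    for project in sorted(set(old_run) | set(new_run)):
        print("%-12s %4d -> %4d"
              % (project, old_run.get(project, 0), new_run.get(project, 0)))

The counts alone won't say which reports are new, but a jump or drop for a given project at least says where to look first.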
2.)
It's not that I'm not interested in determining which results are
indeed bugs, which are false positives, and maybe even where the
false negatives are... It's just that I don't have the required
skills to do so. That in turn made me wonder in what ways I could
help out as a non-programmer, which resulted in the email below.
I understand. Thanks for the interest!
Doing this
automatically would of course be preferable.
Automating this is difficult (impossible in some cases). Think about it this way: the analyzer is an automated process that cannot determine whether its results are real bugs or not. If we knew how to automate this, we would add the logic to the analyzer.
3.)
Doing something like Coverity's open source scan results would indeed
be the holy grail.
But as a start, it would be a lot easier, and still helpful, if the
reports were simply published on a publicly accessible web server?
What would these reports look like? I think it would be valuable to list which projects have been successfully analyzed and how many bugs have been found in each one. Going back to the HTML reports themselves, we could list them, but I am not 100% sure how they'll be used by others. One scenario is that a project developer would try to investigate a reported bug and either fix it or mark it as a false positive. (Again, it would be great to provide an interface for feedback.)
Developing an interface to display analysis results would be useful even for providing feedback about the projects we analyze on our buildbot.
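As a first cut, even a static index page generated from the report directories would go a long way: one row per project, the number of reports, and a link to the reports themselves. A very rough sketch (same assumed layout as the comparison script above; paths and wording are placeholders):

#!/usr/bin/env python
"""Very rough sketch: generate a static index.html listing each
analyzed project and how many reports the analyzer produced for it,
with a link to the per-project report directory. Same assumed layout
as the comparison script; everything here is a placeholder."""
import glob
import os
import sys

def build_index(run_dir):
    rows = []
    for project in sorted(os.listdir(run_dir)):
        if not os.path.isdir(os.path.join(run_dir, project)):
            continue
        n = len(glob.glob(os.path.join(run_dir, project, "*", "report-*.html")))
        # Link to the project's report directory so a developer can
        # browse the individual reports and judge them.
        rows.append('<tr><td><a href="%s/">%s</a></td><td>%d</td></tr>'
                    % (project, project, n))
    return ("<html><body><h1>Static Analyzer Results</h1>"
            "<table><tr><th>Project</th><th>Reports</th></tr>"
            + "".join(rows) + "</table></body></html>")

if __name__ == "__main__":
    run_dir = sys.argv[1]
    with open(os.path.join(run_dir, "index.html"), "w") as f:
        f.write(build_index(run_dir))

Marking reports as fixed or false positive would need something dynamic behind it, but a page like this would at least make the results visible.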