Sorry, I somehow hit the send button too soon. Please ignore the previous e-mail.
The bot does 10 runs for each of the benchmarks (those dots in the logs are meaningful). We could increase the number of runs if it were shown to significantly improve accuracy, but while staging the bot I didn’t see an improvement that would justify the extra time and the larger gaps between tested revisions. 10 runs seems to give a good balance. That said, I’m open to suggestions.
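To make the tradeoff concrete, here is a toy simulation (entirely made-up numbers, nothing to do with the real benchmarks) of how the noise in the reported value shrinks as the per-revision run count grows:

```python
import random
import statistics

# Purely illustrative: a fictional benchmark whose "true" cost is 100 units
# with ~2% run-to-run measurement noise.
random.seed(0)

def median_of_runs(n_runs):
    samples = [random.gauss(100.0, 2.0) for _ in range(n_runs)]
    return statistics.median(samples)

# Spread of the reported (median) value across many repeated experiments,
# for different per-revision run counts.
spread = {}
for n_runs in (3, 10, 30):
    medians = [median_of_runs(n_runs) for _ in range(2000)]
    spread[n_runs] = statistics.stdev(medians)
    print(f"{n_runs:>2} runs: reported-value noise ~ {spread[n_runs]:.2f}")
```

The noise shrinks roughly like 1/sqrt(n), so going from 10 to 30 runs triples the cost for a fairly modest gain in stability.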
The statistics look quite stable if you look across a number of revisions.
And in this particular case the picture seems quite clear.
At http://lnt.llvm.org/db_default/v4/link/104, the list of Performance Regressions suggests that linux-kernel was hit hardest. The regressed metrics are branches, branch-misses, instructions, cycles, seconds-elapsed, and task-clock. Some other benchmarks show regressions in branches and branch-misses, while some show improvements.
The metrics are consistent before and after the commit, so I do not think this one is an outlier.
For example, if you look at the linux-kernel branches - http://lnt.llvm.org/db_default/v4/link/graph?plot.0=1.12.2&highlight_run=104 - it becomes obvious that the number of branches increased significantly as a result of r325313. The metric is very stable around the impacted commit and does not go back down afterwards. The branch-misses metric is more volatile, but still consistently shows a regression as a result of this commit.
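That “stable before, stable after, shifted in between” pattern is essentially a step-change test. Here is a rough sketch of the idea, with invented values standing in for the real branches series:

```python
import statistics

# Hypothetical per-revision values of a metric around a suspect commit;
# the numbers are invented for illustration only.
values_before = [1000.0, 1002.0, 999.0, 1001.0, 1000.0]
values_after  = [1050.0, 1049.0, 1051.0, 1052.0, 1050.0]

def looks_like_step_change(before, after, threshold_stdevs=3.0):
    """Flag a regression when the post-commit mean sits well outside the
    pre-commit noise band, i.e. the series is stable on both sides but
    shifted between them."""
    mean_before = statistics.mean(before)
    noise = statistics.stdev(before) or 1e-9
    shift = statistics.mean(after) - mean_before
    return abs(shift) / noise > threshold_stdevs

print(looks_like_step_change(values_before, values_after))  # → True
```

A volatile metric like branch-misses just needs the shift to clear a wider noise band before you call it a regression.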
Now someone should look into why this particular commit resulted in such a significant increase in branching in the Linux kernel.
As for how to use the LNT web UI, I’m sure you have already checked it, but just in case, here is the link to the LNT documentation - http://llvm.org/docs/lnt/contents.html.
task-clock results are available for “linux-kernel” and “llvm-as-fsds” only, and all other
tests have a blank field. Does that mean there was no noticeable difference in results?
If you go to http://lnt.llvm.org/db_default/v4/link/104#task-clock (or go to http://lnt.llvm.org/db_default/v4/link/104 and select task-clock on the left, which is the same thing), you will see the list of actual values in the “Current” column. All of them are populated; none is blank. The “%” column contains the difference from the previous run in percent, or a dash where there is no measured difference.
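For clarity, the “%” cell is just the relative change from the previous run. A toy example with made-up values (the dash-for-no-difference threshold here is illustrative, not LNT’s exact rule):

```python
# Made-up previous/current measurements for one test and one metric.
previous, current = 4.20, 4.62

delta_pct = (current - previous) / previous * 100.0
# Render as the "%" column would: a dash when there is effectively no change.
cell = "-" if abs(delta_pct) < 0.005 else f"{delta_pct:+.2f}%"
print(cell)  # → +10.00%
```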
Also, the “Graph” and “Matrix” buttons, whatever they are supposed to do, show errors at the moment.
I guess you didn’t select what to graph or what to show as a matrix, did you?
Besides reporting to lnt.llvm.org, each build includes all of the reported data in its log, so you can post-process it in whatever way you find helpful.
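For example, if you save the reported data as JSON, a few lines of Python are enough to post-process it. The “Tests”/“Name”/“Data” keys and the metric names below are my assumptions for the sketch, so adjust them to match what your build logs actually contain:

```python
import json

# Assumed shape of the saved report: a list of tests, each with a name and
# the per-run samples. The entries here are invented for illustration.
report = json.loads("""
{
  "Tests": [
    {"Name": "link.linux-kernel.branches", "Data": [1000.0, 1001.0, 999.0]},
    {"Name": "link.llvm-as-fsds.task-clock", "Data": [2.31, 2.30, 2.32]}
  ]
}
""")

for test in report["Tests"]:
    samples = test["Data"]
    # Take the minimum as the representative value, one common choice for
    # timing-like data since noise only ever adds to the true cost.
    best = min(samples)
    print(f"{test['Name']}: best of {len(samples)} samples = {best}")
```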
Hope this helps.