What percentage of performance increase on the test-suite is considered worth patching?

Hi~ Guys,

Could anyone tell me what positive percentage of performance increase on the LLVM test-suite is considered worth patching?


Hi Min,

Your question assumes that an acceptance criterion for a patch is that the LLVM test-suite improves. That is incorrect; patches do not need to improve the test-suite in order to be accepted.

As a community we care about a very diverse range of workloads. Not all of those (probably not even most!) are represented in the test-suite. What the test-suite aims to do is cover a range of publicly available benchmarks and programs, to give a more representative slice of the “real world” than some other benchmark suites do.

It is very often the case that a workload (be it a real-world application or a benchmark) has a hot, dominating section of code; if that section is optimized well, or in a specific way, the workload improves drastically. It is not expected that every such optimization would also improve tests in the test-suite. If it does - fantastic, but that is not a blocker.

However, the test-suite can give an indication that a patch is not good in the general case. It has a more diverse set of workloads than any other suite I’ve come across, and so can provide a decent indication of how a patch behaves on “more real world” code than whatever workload you’re targeting. It can tell you whether your patch even triggers on real-world code, and whether it causes regressions.
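To make the regression check concrete, here is a minimal sketch of classifying per-test timing deltas. The test names, timings, and the 2% noise threshold are all hypothetical, and the real test-suite tooling (e.g. its compare.py utility) is far more sophisticated - this only shows the basic arithmetic:

```python
def percent_change(base: float, new: float) -> float:
    """Signed percentage change from a baseline time to a patched time."""
    return (new - base) / base * 100.0

def compare(baseline: dict, patched: dict, noise_pct: float = 2.0) -> dict:
    """Classify each test's runtime delta as improved/regressed/unchanged.

    Deltas within +/-noise_pct are treated as measurement noise
    (the threshold here is an assumption, not a project policy).
    """
    report = {}
    for name, base_t in baseline.items():
        new_t = patched.get(name)
        if new_t is None:  # test missing from the patched run
            continue
        delta = round(percent_change(base_t, new_t), 1)
        if abs(delta) <= noise_pct:
            verdict = "unchanged"
        elif delta > 0:
            verdict = "regressed"   # execution time went up
        else:
            verdict = "improved"    # execution time went down
        report[name] = (delta, verdict)
    return report

# Hypothetical execution times in seconds for three tests.
baseline = {"sqlite3": 10.0, "bullet": 4.0, "lencod": 8.0}
patched  = {"sqlite3": 10.9, "bullet": 3.6, "lencod": 8.05}

for name, (delta, verdict) in compare(baseline, patched).items():
    print(f"{name}: {delta:+.1f}% ({verdict})")
# sqlite3: +9.0% (regressed)
# bullet: -10.0% (improved)
# lencod: +0.6% (unchanged)
```

The point of the sketch is the interpretation, not the numbers: a reviewer cares whether any test regresses beyond noise, not whether some fixed improvement percentage is reached.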

So regressing the test-suite may block your patch, but improving the test-suite is not a requirement.