New and more general Function Merging optimization for code size

Hi everyone,

I’m currently working on a new function merging optimization that is more general than LLVM’s current function merging optimization, which works only on identical functions.

I would like to know if the community has any interest in having a more powerful function merging optimization.

---- More Details ----

Up until now, I have been focusing on the quality of the code reduction.

Some preliminary results on SPEC’06 in a full-LTO fashion:
The baseline applies no function merging but uses the same optimization pipeline, and I am comparing my function merging with LLVM’s identical-function merging, where everything else in the optimization pipeline is the same as the baseline.
Average reduction in the final executable file over the baseline: 5.55%, compared to 0.49% for the identical-function merge.

Average reduction in the total number of instructions over the baseline: 7.04%, compared to 0.47% for the identical-function merge.

The highest reduction in executable size is about 20% (both 429.mcf and 447.dealII), and the highest reduction in the total number of instructions is about 37% (447.dealII).

It has an average slowdown of about 1%, but shows no statistically significant difference from the baseline on most of the benchmarks in the SPEC’06 suite.

Because this new function merging technique is able to merge any pair of functions, with only a few restrictions, the exploration strategy is critical to keeping compilation time acceptable.

At the moment I’m starting to focus more on simplifying the optimization and reducing the overhead in compilation time.
My optimization has an exploration threshold which can be tuned to trade off compilation-time overhead against more aggressive merging.
It does not perform n^2 merge operations. Instead, I have a ranking strategy based on a similarity metric computed from each function’s “fingerprint”.
The threshold limits the exploration to focus on the top functions of the rank.
The idea is to make the ranking mechanism as lightweight as possible.
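As an illustration of the ranking idea, here is a standalone sketch with made-up names (not the actual pass): fingerprints are cheap fixed-size vectors, candidates are ordered by a distance on those vectors, and the exploration threshold caps how many top candidates are actually tried.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdlib>
#include <vector>

// Hypothetical sketch: each function has a precomputed fingerprint
// (opcode -> occurrence count); candidates are ranked by a cheap distance
// on fingerprints, and only the top `Threshold` candidates are explored.
using Fingerprint = std::array<unsigned, 64>;

// Manhattan distance between two fingerprints: a lower value means the
// two functions use a more similar instruction mix.
unsigned distance(const Fingerprint &A, const Fingerprint &B) {
  unsigned D = 0;
  for (size_t I = 0; I < A.size(); ++I)
    D += std::abs(int(A[I]) - int(B[I]));
  return D;
}

// Return the indices of the `Threshold` most similar candidates to F.
std::vector<size_t> rankCandidates(const Fingerprint &F,
                                   const std::vector<Fingerprint> &All,
                                   size_t Threshold) {
  std::vector<size_t> Idx(All.size());
  for (size_t I = 0; I < Idx.size(); ++I)
    Idx[I] = I;
  std::sort(Idx.begin(), Idx.end(), [&](size_t A, size_t B) {
    return distance(F, All[A]) < distance(F, All[B]);
  });
  if (Idx.size() > Threshold)
    Idx.resize(Threshold);
  return Idx;
}
```

With a small threshold, the expensive merge attempt runs only on the handful of most similar candidates instead of all n functions.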

Cheers,

Rodrigo Rocha

Yes, I think there is certainly interest in this space. Can you explain in more detail how this works? (I’m also, as a side note, keeping my eye out for things like this that might also help us do more compile-time-efficient SLP vectorization). -Hal

Hi Hal,

Because my function merging strategy is able to merge any two functions, allowing for different CFGs, different parameters, etc., I am unable to use just a simple hash value to decide whether or not two functions are similar.

Therefore, the idea is to have an infrastructure which allows me to decide whether or not two functions are similar without having to traverse both functions (which would basically amount to performing a merge for all pairs).
I’m precomputing a fingerprint of all functions, which is then cached for later use (this might also be useful to enable this function merging with ThinLTO).
At the moment, this fingerprint is just a map of opcode → number of occurrences in the function, which is just an array of roughly 64 integers.
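A toy, self-contained version of such a fingerprint might look like the following (in the real pass this would walk the LLVM Instructions of a Function and bucket their opcodes; here opcodes are plain integers):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Toy sketch of the fingerprint described above: a fixed-size array of
// opcode-occurrence counts, i.e. a ~64-int array per function.
constexpr size_t NumBuckets = 64;
using Fingerprint = std::array<unsigned, NumBuckets>;

Fingerprint computeFingerprint(const std::vector<unsigned> &Opcodes) {
  Fingerprint FP{}; // zero-initialized counts
  for (unsigned Op : Opcodes)
    ++FP[Op % NumBuckets]; // fold each opcode into one of the ~64 buckets
  return FP;
}
```

Because the fingerprint only counts opcodes, it can be computed once per function and cached, which is what makes it attractive for a ThinLTO-style setting.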

Then, for each function being considered for a merge, I’m able to rank the candidates with a PriorityQueue.

Hopefully, we are able to do that in a very lightweight manner.
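Sketching the PriorityQueue-based ranking with illustrative names (the distances here stand in for the similarity metric that would be derived from the cached fingerprints):

```cpp
#include <cstddef>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Sketch of ranking merge candidates with a priority queue: candidates
// with the smallest fingerprint distance (i.e. the most similar ones)
// come out first, and only the top K are explored.
using Candidate = std::pair<unsigned, size_t>; // (distance, function index)

std::vector<size_t> topCandidates(std::vector<Candidate> Dists, size_t K) {
  // Min-heap ordered by distance, so the most similar candidate is on top.
  std::priority_queue<Candidate, std::vector<Candidate>,
                      std::greater<Candidate>>
      Q(Dists.begin(), Dists.end());
  std::vector<size_t> Top;
  while (!Q.empty() && Top.size() < K) {
    Top.push_back(Q.top().second);
    Q.pop();
  }
  return Top;
}
```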

After that, the more expensive bit will be actually performing the merge and then checking for profitability, using the TTI for code-size.
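The profitability test itself boils down to a size comparison; a hedged sketch follows (in the real pass the sizes would come from TargetTransformInfo with a code-size cost kind, and the thunk overhead accounts for the small forwarding functions that replace the merged originals):

```cpp
#include <cstddef>

// Illustrative profitability check: keep the merge only if the estimated
// code size of the merged function plus the two call-forwarding thunks is
// smaller than the two original functions combined. The sizes here are
// plain numbers standing in for TTI code-size estimates.
bool isMergeProfitable(size_t MergedSize, size_t ThunkSize,
                       size_t SizeF1, size_t SizeF2) {
  return MergedSize + 2 * ThunkSize < SizeF1 + SizeF2;
}
```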

I haven’t given much thought about adapting this infrastructure for the SLP Vectorizer, but perhaps something similar could also work there.

Cheers,

Rodrigo Rocha

Hi Hal,

Because my function merging strategy is able to merge any two functions, allowing for different CFGs, different parameters, etc., I am unable to use just a simple hash value to decide whether or not two functions are similar.

Can you give us more detail on what criteria you use for “fuzzy” merging? Jason Koenig had uploaded a prototype of something similar in 2015, based on mergefuncs.

Therefore, the idea is to have an infrastructure which allows me to decide whether or not two functions are similar without having to traverse both functions (which would basically amount to performing a merge for all pairs).
I’m precomputing a fingerprint of all functions, which is then cached for later use (this might also be useful to enable this function merging with ThinLTO).
At the moment, this fingerprint is just a map of opcode → number of occurrences in the function, which is just an array of roughly 64 integers.

The difficulty with mergefuncs is keeping its comparator / hasher in sync with the IR. All it takes is one new IR property to break it, and I think the only way to fix this issue is to have comparison / hashing be part of the IR definition. How would you solve this issue?

Then, for each function being considered for a merge, I’m able to rank the candidates with a PriorityQueue.

Hopefully, we are able to do that in a very lightweight manner.

After that, the more expensive bit will be actually performing the merge and then checking for profitability, using the TTI for code-size.

I haven’t given much thought about adapting this infrastructure for the SLP Vectorizer, but perhaps something similar could also work there.

Cheers,

Rodrigo Rocha

Hi everyone,

I’m currently working on a new function merging optimization that is more general than LLVM’s current function merging optimization, which works only on identical functions.

I would like to know if the community has any interest in having a more powerful function merging optimization.

Yes, I think there is certainly interest in this space.

Yes.

I’d also be interested in hearing about how this combines with MachineOutliner. I expect they find some redundant things, but mostly help each other.

---- More Details ----

Up until now, I have been focusing on the quality of the code reduction.

Some preliminary results on SPEC’06 in a full-LTO fashion:
The baseline applies no function merging but uses the same optimization pipeline, and I am comparing my function merging with LLVM’s identical-function merging, where everything else in the optimization pipeline is the same as the baseline.
Average reduction in the final executable file over the baseline: 5.55%, compared to 0.49% for the identical-function merge.

Average reduction in the total number of instructions over the baseline: 7.04%, compared to 0.47% for the identical-function merge.

IIRC this roughly matches Jason’s results on Chrome / Firefox: a few percentage point reduction. Can you try large applications like Chrome / Firefox / WebKit to get more real-world numbers? It’s interesting to compare what you get from regular builds as well as LTO builds (which will take forever, but expose much more duplication).

The highest reduction in executable size is about 20% (both 429.mcf and 447.dealII), and the highest reduction in the total number of instructions is about 37% (447.dealII).

It has an average slowdown of about 1%, but shows no statistically significant difference from the baseline on most of the benchmarks in the SPEC’06 suite.

Jason’s (uncommitted) work found speedups when compiling all of Chrome. The way he did this was with an early and fast mergefuncs which didn’t try to be fuzzy: it just removed straightforward code duplication, which meant the optimizer spent less time because there were fewer functions. He then had a later mergefuncs which did fuzzy matching and tried to pick up more things. Keeping it somewhat late is important because merging similar functions might make them less attractive to the inliner (because the merged functions are now slightly more complex).

How does it compare to the new machine outliner pass in LLVM?

https://www.youtube.com/watch?v=yorld-WSOeU
http://lists.llvm.org/pipermail/llvm-dev/2016-August/104170.html

  • Matthias

Thanks for the comments.

At the moment, I’m refactoring the code and also preparing a document describing the optimization in detail, which I’ll make available to everyone ASAP.
This will make our discussion easier.

As I see it, function merging and function outlining (e.g., MachineOutliner) are trying to solve the same fundamental problem of redundant/repeated code. But, like many other optimizations, I don’t think they are exclusive. Instead, they can probably work together in collaboration.

Some quick facts about my optimization, and how it may differ from the MachineOutliner:

  • it works on the IR level, so it’s not affected by register allocation, etc.

  • if two functions are identical it produces a result very much like the existing MergeFunctions.

  • if the two functions are not identical, it tries to maximize the amount of similar code merged,
    but it does not need to create one new function for each “block” of code merged.

  • it’s not limited to working within basic blocks; in fact, it is able to merge similar code even if it spans a different number of basic blocks in each of the functions being merged.

  • however, as it merges different functions, it is unable to find code duplication within a single function. I can see how it could be adapted to handle some cases of intra-function duplication, but in general the MachineOutliner would be valuable in those cases.

As I described before, the fingerprint of a function is basically just a map of opcode to number of occurrences, so it wouldn’t really be affected by changes in the IR.

The actual merge operation, on the other hand, needs to check the equivalence between instructions, which might be affected by changes in the IR. However, the same is true for the IR Verifier and the existing function merging of identical functions. It would be interesting to see if this can be simplified/unified to reduce the number of changes to the code when the IR changes.
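To illustrate the kind of per-instruction check involved, here is a toy standalone sketch (the real check would operate on LLVM Instructions and Types, and, as noted above, would need to be kept in sync with the IR definition):

```cpp
#include <vector>

// Toy sketch of the per-instruction equivalence check the merge needs:
// two instructions can share a merged slot only if their opcode, result
// type and operand types line up. Opcodes and types are plain integer
// ids here, standing in for the real IR entities.
struct Inst {
  unsigned Opcode;
  unsigned ResultType;                 // toy type id
  std::vector<unsigned> OperandTypes;  // toy operand type ids
};

bool equivalent(const Inst &A, const Inst &B) {
  return A.Opcode == B.Opcode &&
         A.ResultType == B.ResultType &&
         A.OperandTypes == B.OperandTypes;
}
```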

The way I see it working in LTO would be something like: remove unnecessary functions, perform the quick identical-function merging to remove straightforward replication, and then apply the more powerful function merging optimization.
As suggested by Jason’s work, the first two optimizations would potentially reduce the number of functions to be analysed, quickly solving the easy cases.

It is true that, depending on how many functions are merged relative to the total size of the code, we can observe a reduction in compilation time, as the remaining optimizations and the back end now have less code to work with.

I think it would be interesting to test on large programs like the ones suggested.

Cheers,

Rodrigo Rocha