Status of the function merging pass?

Hi,

I’m interested in finding ways to reduce code size. LLVM’s MergeFunctions pass seems like a promising option, and I’m curious about its status in tree.

Enabling MergeFunctions gives a 1% code size reduction across the entire iOS shared cache (a collection of a few hundred system-critical DSOs). The numbers are even more compelling for Swift code. In fact, the Swift compiler enables MergeFunctions by default when optimizing, along with an even more aggressive merging pass which handles equivalence-modulo-constant-uses (https://github.com/apple/swift/blob/master/lib/LLVMPasses/LLVMMergeFunctions.cpp).

Is anyone actively working on enabling MergeFunctions in LLVM’s default pipelines? Is there a roadmap for doing so?

ISTM that preventing miscompiles when merging functions is a serious, unsolved problem. That is, it’s hard for the MergeFunctions pass to be certain that two functions are a) truly identical and b) safe to merge.

Is there a systematic solution at the IR-level, given that the semantics of IR are subject to change? Is extensive testing the only solution? Or is this intractable, and the only safe approach is to perform merging post-regalloc (or, at some late point when equivalence is easier to determine)?

thanks,
vedant

(Disclaimer: I don’t know anything about MergeFunctions; I’m just assuming functionality from the name)

How does MergeFunctions compare to performing identical code folding (ICF) in the linker?

Yikes, Mail.app dropped a bunch of CC’s. I’ll try to re-add them now.


IIUC there are a couple important differences:

1) Linker ICF tends to be fairly restricted. E.g. on Darwin, the linker only deduplicates if a) the compiler has marked the symbol “auto-hide”, b) the text of both symbols literally memcmp’s to 0, and c) they have exactly the same relocations. MergeFunctions results in substantial code size reductions even with linker deduplication enabled, probably because it can detect more kinds of equivalent functions.

2) MergeFunctions runs much earlier in the pipeline, so it can save compile time (there’s no need to keep optimizing bodies that have been reduced to forwarding thunks).

vedant

The linker has access to a narrower set of information, so it can
only merge byte-for-byte identical functions, while, in theory, you can
imagine that mergefunc could discover equivalences (through value
numbering) and understand that, e.g.,

define i32 @pat(i32 %blah) {
  %r = sub i32 %blah, -1
  ret i32 %r
}

and

define i32 @tinky(i32 %blah) {
  %r = add i32 %blah, 1
  ret i32 %r
}

are equivalent. Some things to keep in mind:
1) My example is relatively silly, but slightly more elaborate
patterns might show up in real code.
2) I think LLVM does enough canonicalization that we should be able to
catch these anyway.
3) I don't think the pass as-is in tree does any equivalence finding,
but that's a potential advantage of running this analysis as an IR pass.

Thanks,

In Rust we’ve been running with MergeFunctions enabled by default for a while now, and have recently also enabled the use of aliases instead of thunks. Apart from some initial bugs, we haven’t encountered any significant issues (one minor issue: NVPTX doesn’t support aliases and has CC restrictions).

As Rust tends to be quite heavy on monomorphization, MergeFuncs can give significant binary size reductions. I don’t have any comprehensive numbers, but from checking this on a pet project just now, it reduces final artifact size by 13% and I’ve seen some similar numbers in the ~10% range quoted before.

So, at least for Rust’s use case this pass seems to be both quite robust and useful :)

Regards,

Nikita

Hi Nikita,

Glad to hear that Rust code can benefit a lot from this.

I have put up patches to enable merging of similar functions with ThinLTO.

https://reviews.llvm.org/D52896 etc.

This is more powerful than the existing merge-functions pass, and all we need to do is port these patches to trunk LLVM. I’d be happy to help with this effort.

-Aditya


I still don’t understand why we should ditch mergefuncs instead of incrementally improving it. I’d like to understand what’s actually changing incrementally, and first fix the fundamental flaw mergefuncs has (as I discuss below).

To answer Vedant’s question: I think the fundamental problem with mergefuncs is that it needs to understand IR perfectly, for equality comparison, hashing, and “fuzzy” matching. Any solution that’s on-by-default should address this issue: when we change IR we cannot allow mergefuncs to suddenly be wrong in a subtle way. For example, when we added cmpxchg “failure” order, mergefuncs needed to know about it, otherwise it could merge functions which differed only in failure ordering and suddenly generate code that was wrong in an incredibly hard to diagnose manner.
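
For concreteness, a minimal IR sketch of that hazard (function names invented for illustration): the two bodies below are identical except for the cmpxchg failure ordering, so a comparator that doesn’t inspect that operand would treat them as equal and fold one into the other, silently changing the ordering one of them requested.

; Identical except for the failure ordering on the cmpxchg.
define i32 @cas_relaxed_fail(i32* %p, i32 %old, i32 %new) {
  %pair = cmpxchg i32* %p, i32 %old, i32 %new seq_cst monotonic
  %val = extractvalue { i32, i1 } %pair, 0
  ret i32 %val
}

define i32 @cas_acquire_fail(i32* %p, i32 %old, i32 %new) {
  %pair = cmpxchg i32* %p, i32 %old, i32 %new seq_cst acquire
  %val = extractvalue { i32, i1 } %pair, 0
  ret i32 %val
}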

Once that’s addressed, I think mergefuncs can be improved in a few ways.

First it can be run early to remove exact duplicates. This will speed up build times. I had an intern, Jason, work on mergefuncs a few years ago and he measured speedups when compiling Chrome just through an early run.

Then mergefuncs should be improved to do fuzzy matching, where it determines that functions are similar enough that they can be variants of each other with an extra argument passed in to specialize each “flavor”. Jason had posted a patch for this back then as well, and it yielded some gains on Chrome’s binary size. He hadn’t explored the full breadth of specializations (do you just find differences in constants, or branch around entire code blocks, etc). There’s extra science to perform around different optimization levels.
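
To sketch the kind of rewrite such fuzzy matching would perform (names invented; the actual patches may structure things differently):

; Before merging: identical bodies except for one constant.
define i32 @scale_by_2(i32 %x) {
  %r = mul i32 %x, 2
  ret i32 %r
}

define i32 @scale_by_4(i32 %x) {
  %r = mul i32 %x, 4
  ret i32 %r
}

; After merging (sketch): the differing constant becomes an extra parameter,
; and each original body is reduced to a thin call that pins its constant,
; e.g. @scale_by_2 becomes "tail call i32 @scale_impl(i32 %x, i32 2)".
define internal i32 @scale_impl(i32 %x, i32 %k) {
  %r = mul i32 %x, %k
  ret i32 %r
}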

The fuzzy matching should only be run later, and some science should be put in determining how it interacts with inlining.

Vedant, I’m happy to chat in person next time we’re in the same building :)

Adding back llvmdev; sorry about that. Recently had to switch mail clients, and Outlook’s reply-all button is broken.

-Eli


From: Eli Friedman
Sent: Thursday, January 31, 2019 5:35 PM
To: 'JF Bastien' <jfbastien@apple.com>; Aditya K <hiraditya@msn.com>; Vedant Kumar <vsk@apple.com>
Subject: RE: [EXT] Re: [llvm-dev] Status of the function merging pass?

Specifically on the question of making sure MergeFuncs doesn’t regress, I’m not sure what you think we can do.

Define IR in a manner in which individual IR nodes and their properties can’t diverge from how they’re compared / hashed / fuzzy-matched. I hate to say it, but say through TableGen.

We have certain extension points which allow changes to IR in general without individually verifying each pass: metadata, and function attributes. Merging can just discard metadata, and refuse to merge functions with mismatched attributes. Similarly, we can add intrinsics which don’t have special semantics (essentially, intrinsics which can be treated as equivalent to function calls); we can also generally ignore those.

Agreed, that’s the conservative route. Though for function attributes I’d advocate some knowledge of the relevant ones to enable merging (the same way we merge constants with different visibilities already).

If anything else about the IR changes, it’s necessary to individually verify each pass to make sure they don’t make any invalid assumptions. We’ll inevitably make mistakes in that verification because our transformation passes aren’t proof-verified. There isn’t really anything about that which is specific to mergefuncs in particular.

Agreed, and historically we’ve failed at doing this correctly. What I propose above (tying the IR definition to the comparisons) makes it less error-prone.

There are maybe a few changes we could make that would make MergeFuncs more resistant to certain classes of IR changes. For example, we could require that all instructions store all information which isn’t an operand or metadata in a way that would allow MergeFuncs to retrieve it as an opaque blob. Or MergeFuncs could bail out if it sees an instruction with an unknown opcode. But it’s impossible to write a pass that will never need to be updated.

Agreed, I’m just saying that we can avoid the most frequent (and here hard to find) cases.
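
To make the attribute point concrete, here is a small, hypothetical IR example (the functions and the specific attribute are invented for illustration): the bodies are identical, but only @no_unwind promises nounwind. If a merger ignored attributes and redirected @may_unwind’s callers to @no_unwind, codegen for @no_unwind is free to omit unwind tables, so an exception thrown by @callee could no longer propagate through what used to be @may_unwind. Refusing to merge on mismatched attributes, or only merging in a provably safe direction, avoids that.

declare void @callee()

define void @may_unwind() {
  call void @callee()
  ret void
}

; Identical body, but with an extra guarantee the other function never made.
define void @no_unwind() nounwind {
  call void @callee()
  ret void
}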

Hi Nikita,

Glad to hear that Rust code can benefit a lot from this.

I have put up patches to enable merging of similar functions with ThinLTO.

https://reviews.llvm.org/D52896 etc.

This is more powerful than the existing merge-functions pass, and all we need to do is port these patches to trunk LLVM. I’d be happy to help with this effort.

At the risk of straying too far off-topic: I think this patch is interesting, but I’d need help to understand it better. Would you mind starting a new thread about it? Specifically, I’d like to know what the marginal benefit is of factoring out dissimilar instructions, versus simply factoring out dissimilar constants. Presumably there’s a compile-time vs. performance vs. code size tradeoff. I think this is worth digging into because equivalence-modulo-constant-uses merging was sufficient for Swift.

vedant

Hi Nikita,

Glad to hear that Rust code can benefit a lot from this.

I have put up patches to enable merging of similar functions with ThinLTO.

https://reviews.llvm.org/D52896 etc.

This is more powerful than the existing merge-functions pass, and all we need to do is port these patches to trunk LLVM. I’d be happy to help with this effort.

I still don’t understand why we should ditch mergefuncs instead of incrementally improving it. I’d like to understand what’s actually changing incrementally, and first fix the fundamental flaw mergefuncs has (as I discuss below).

To answer Vedant’s question: I think the fundamental problem with mergefuncs is that it needs to understand IR perfectly, for equality comparison, hashing, and “fuzzy” matching. Any solution that’s on-by-default should address this issue: when we change IR we cannot allow mergefuncs to suddenly be wrong in a subtle way. For example, when we added cmpxchg “failure” order, mergefuncs needed to know about it, otherwise it could merge functions which differed only in failure ordering and suddenly generate code that was wrong in an incredibly hard to diagnose manner.

Yes, that’s exactly what I’m getting at. More examples: MergeFunctions miscompiled vararg functions and musttail calls up until very recently (llvm.org/PR40345 and r351405, respectively).

As Eli pointed out, the problem isn’t entirely avoidable. I think the interesting question here is whether (or rather, how) to make it more tractable.

Once that’s addressed, I think mergefuncs can be improved in a few ways.

First it can be run early to remove exact duplicates. This will speed up build times. I had an intern, Jason, work on mergefuncs a few years ago and he measured speedups when compiling Chrome just through an early run.

Nice!

Then mergefuncs should be improved to do fuzzy matching, where it determines that functions are similar enough that they can be variants of each other with an extra argument passed in to specialize each “flavor”. Jason had posted a patch for this back then as well, and it yielded some gains on Chrome’s binary size. He hadn’t explored the full breadth of specializations (do you just find differences in constants, or branch around entire code blocks, etc). There’s extra science to perform around different optimization levels.

The fuzzy matching should only be run later, and some science should be put in determining how it interacts with inlining.

Vedant, I’m happy to chat in person next time we’re in the same building :)

That’d be nice :). At this point I’m still trying to wrap my head around how to put function comparison on sounder footing. Down the road, fuzzy matching seems like a promising area to work on.

vedant

Hi Nikita,

Glad to hear that Rust code can benefit a lot from this.

I have put up patches to enable merging of similar functions with ThinLTO.

https://reviews.llvm.org/D52896 etc.

I had some comments on several of these, but I don’t think they’ve been addressed yet? One of my meta-comments was that it would be better to have the patches be standalone increments on top of each other (they seemed to be split up somewhat arbitrarily when I looked before - maybe this has been fixed?), and to link them in Phab as parent/child revisions (so you can get a stack of them displayed in Phab to see the relationships).
Thanks,
Teresa