Evaluate and Expand the Module-Level Inliner

LLVM’s inliner is a bottom-up pass that operates on strongly connected components (SCCs) of the call graph. This constrains the order in which call sites are evaluated, which limits the effectiveness of inlining.

We now have a functional module inliner, as a result of GSoC 2021 work.

We want to explore call-site priority schemes, the effectiveness and frequency of running function passes after successful inlinings, and the interplay with the ML inline advisor, to name a few areas of exploration.


I am interested in this project. It would be really helpful if you could guide me.

Thank you for your interest in the module-level inliner.

As @mtrofin mentioned, we now have a functional module-level inliner where we inline callees in any priority order we choose instead of the traditional bottom-up style.

Last year, Liqiang Tao laid the groundwork. Aside from the module inliner proper, he added the ability to choose which priority function to use and ran some performance experiments.

This year, I would like to see performance experiments involving profiling data. For example, here are types of questions we can try to tackle:

  • What happens if we inline callees in descending order of the ratio of the callee’s profile count to the callee’s size?
  • Should we have some safeguard in place to avoid inlining too much in one area of the call graph?
  • Would it be useful to take into account changes to the prologue/epilogue size due to inlining? Given A->B->C (A calling B calling C), inlining C into B may increase the size of B’s prologue/epilogue. The additional pushes/pops in the new B could cause a performance penalty when A calls the new B with C inlined into it. This can be especially problematic if B rarely calls C. (I have some code for this subproject.)
  • What other analyses can we add to the priority function to improve the performance of the resulting binary?
  • If we inline all/most hot functions with the module inliner, do we still need to run the traditional bottom-up inliner at all? Or should we keep it to inline lukewarm and/or cold functions?

Please let me know if any of these sounds interesting to you. If you have other ideas, don’t hesitate to bring them up.

Thank you for the brief. I’m very interested in this project. Are there any resources or papers I could read to understand this in further detail?

PS: Is this project still open? If so, where should I apply to participate?

Thank you for your interest. Here are a couple of sample papers:

Andersson discusses the use of a priority queue in deciding which functions to inline (and in what order to do so).

Prokopec et al. discuss top-down inlining for a Java JIT compiler. Aside from deciding which function to inline next, they describe how to avoid excessive inlining in one particular area of the call graph.

I see that two of you are interested. Multiple people can certainly work on this project. Now that the module inliner infrastructure has landed, it serves as a playground where you can try different heuristics. I suggest you write a proposal on what you would like to work on – partly to let us know your specific interests and partly to prevent excessive overlap with other people.

@mtrofin, would you mind giving us application links and other information on logistics? Thanks!

Is there a Slack or Discord channel, or an email list, where this can be discussed in detail?

There isn’t one, but please feel free to create a new thread.