Machine learning approaches for optimizing inlining constants in Clang


Currently Clang's inliner (GCC's too) is not very clever and relies on a
set of predefined constants. I think these constants are far from optimal
in many cases. I can say more: in the Clang sources you can find a FIXME
comment along the lines of "we should base our constants for the inliner
on something more scientific". Of course, different applications have
different optimal sets of these predefined constants, but I think it's
possible to improve the current defaults by testing several candidate sets
of constants on some training dataset and measuring their efficiency
(e.g. runtime and code size of the compiled benchmarks).
With this we could find better constants and potentially improve performance.
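To make the idea concrete, here is a minimal sketch (my own illustration, not existing tooling) of a random search over a single inlining constant. In a real experiment, `evaluate` would compile and run a benchmark with something like `clang -O2 -mllvm -inline-threshold=<t>` (I believe 225 is the current default at -O2) and return the measured runtime; here it is just a caller-supplied callback, so the search logic itself is self-contained:

```python
import random


def search_inline_threshold(evaluate, lo=0, hi=2000, trials=50, seed=0):
    """Random search for the inline threshold minimizing `evaluate`.

    `evaluate(t)` should return a cost to minimize, e.g. the runtime
    of a benchmark suite compiled with -mllvm -inline-threshold=<t>.
    Returns (best_threshold, best_cost).
    """
    rng = random.Random(seed)  # fixed seed for reproducible experiments
    best_t, best_cost = None, float("inf")
    for _ in range(trials):
        t = rng.randint(lo, hi)
        cost = evaluate(t)  # e.g. median benchmark runtime in seconds
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost


if __name__ == "__main__":
    # Stand-in objective just for demonstration: pretend the benchmark
    # runs fastest near threshold 275 (an arbitrary made-up optimum).
    best_t, best_cost = search_inline_threshold(
        lambda t: (t - 275) ** 2, trials=200)
    print(best_t, best_cost)
```

Random search is just the simplest baseline; the same harness could drive grid search, hill climbing, or a proper Bayesian optimizer, and could search over all of the inliner's constants at once instead of a single threshold.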

I found some related papers:

  1. - A good survey of machine-learning-based optimizations for
    compilers in general

  2. - Optimizing inlining constants for Java


  • I haven't read it yet, but it looks promising

So I have several questions:

  1. What do you think about the idea?

  2. Can you recommend anything related? Probably somebody has already
    done this kind of research for Clang/GCC and I just didn't find it.