Controlling the LTO optimization level

> Hi all,
>
> I wanted to start a thread to discuss ways to control the optimization
> level when using LTO. We have found that there are use cases for the LTO
> mechanism beyond whole-program optimization, in which full optimization
> is not always needed or desired. We started that discussion over in
> http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150316/266560.html
> and I thought I'd summarize the problem and possible solutions here:
>
> Problem

How much of the LTO time is actually spent in the optimization passes?

– Sean Silva

Hi all,

I wanted to start a thread to discuss ways to control the optimization
level when using LTO. We have found that there are use cases for the LTO
mechanism beyond whole-program optimization, in which full optimization
is not always needed or desired. We started that discussion over in
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150316/266560.html
and I thought I'd summarize the problem and possible solutions here:

Problem
-------

As currently implemented, the control flow integrity checks in Clang rely on
a so-called bit set lowering pass to implement them efficiently. The
current implementation of the bit set lowering pass requires whole-program
visibility. The full details of why are described in the design document at:
http://clang.llvm.org/docs/ControlFlowIntegrityDesign.html

We currently achieve whole-program visibility using LTO. The trouble with LTO
is that it comes with a significant compile time cost -- on large programs
such as Chrome, compiling with link-time optimization can be over 7x slower
(over 3 hours has been measured) than compiling without.

We’ve had some recent improvements that speed things up considerably, and hopefully things will continue to get faster, but I’m sure there will always be cases where LTO is slower.

So I would like there to be a way for users to choose whether to apply
optimizations, and how much optimization to apply.

Achieving this requires a design for how users should specify the level of
optimization to apply, as well as a design for changes to the clang driver
and the various LTO plugins so that the plugin knows whether optimizations
are required.

Solutions
---------

1) Controlled at compile time

Strawman proposal for command line syntax:

-flto-level=X means optimize at level X. At link time, the LTO plugin will
take the maximum of all -flto-level flags and optimize at that level.

-flto-level is inferred from other flags if not specified:

-flto implies -flto-level=2.
If -flto not specified, -O >= 1 implies -flto-level=1.
Otherwise, default to -flto-level=0.
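
To make the inference concrete, here is a rough sketch of that rule as
driver-side logic (the helper name and its parameters are illustrative
only, not part of any existing API):

    static unsigned inferLTOLevel(bool HasFLTO, unsigned CompileOptLevel) {
      if (HasFLTO)
        return 2;               // -flto implies -flto-level=2
      if (CompileOptLevel >= 1)
        return 1;               // no -flto, but compiled with -O1 or higher
      return 0;                 // otherwise no LTO-time optimization
    }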

This is probably easier to implement in a supported way. We can pass the
LTO level to the linker via module flags as shown in the patches attached to
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150316/266778.html
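
As an illustration of the module-flags idea, something along these lines
could work; the "LTO Level" flag name and the helpers are assumptions here,
not necessarily what those patches use:

    #include "llvm/IR/Constants.h"
    #include "llvm/IR/Metadata.h"
    #include "llvm/IR/Module.h"

    // Compile time: record the level in each module. The Max behavior makes
    // the merged module end up with the largest of all input levels,
    // matching the "take the maximum of all -flto-level flags" rule.
    static void recordLTOLevel(llvm::Module &M, unsigned Level) {
      M.addModuleFlag(llvm::Module::Max, "LTO Level", Level);
    }

    // Link time: the plugin reads the merged flag back out.
    static unsigned readLTOLevel(const llvm::Module &M) {
      if (auto *CI = llvm::mdconst::extract_or_null<llvm::ConstantInt>(
              M.getModuleFlag("LTO Level")))
        return unsigned(CI->getZExtValue());
      return 2; // treat a missing flag as "optimize fully"
    }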

2) Controlled at link time

-flto-level has the same semantics as in the previous sub-section, except it is
instead passed at link time.

This is to a certain extent possible to implement with libLTO by passing
-mllvm flags to the linker, or with gold by passing -plugin-opt flags.

According to Duncan, passing flags to libLTO this way is unsupported --
if we did want to accept flags at link time, and we absolutely don't want
to pass flags to the linker that way, I suppose we could do something like
have the clang driver synthesize a module containing the module flags we want.
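
If the driver did synthesize such a module, a minimal sketch (the module
name, flag name and helper are assumptions; this uses the current bitcode
writer API) could be:

    #include "llvm/Bitcode/BitcodeWriter.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/raw_ostream.h"

    // Emit a tiny bitcode module whose only payload is the module flag; the
    // driver would add this file to the link so the LTO plugin sees the flag.
    static void writeLTOLevelModule(unsigned Level, llvm::raw_ostream &OS) {
      llvm::LLVMContext Ctx;
      llvm::Module M("lto-level-flags", Ctx);
      M.addModuleFlag(llvm::Module::Max, "LTO Level", Level);
      llvm::WriteBitcodeToFile(M, OS);
    }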

Option (2) makes more sense to me, but I don’t like the idea of introducing a new command line option. At least for now, this seems like a fairly special-purpose request for CFI. I haven’t heard anyone else asking for LTO with minimal optimization. How about if you just pass the “-mllvm” options yourself when using CFI?

If it turns out that there are lots of people who want this feature, I could imagine that we might someday repurpose the existing -O optimization options to pass something to the linker to control LTO optimization. The downside of that is the clang driver doesn’t know whether the link will involve LTO or not, so it would have to pass those flags to the linker all the time. That’s not a real problem, but it’s just extra complexity that doesn’t seem justified unless it benefits more people.

>
> Hi all,
>
> I wanted to start a thread to discuss ways to control the optimization
> level when using LTO. We have found that there are use cases for the LTO
> mechanism beyond whole-program optimization, in which full optimization
> is not always needed or desired. We started that discussion over in
> http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150316/266560.html
> and I thought I'd summarize the problem and possible solutions here:
>
> Problem
> -------
>
> As currently implemented, the control flow integrity checks in Clang rely on
> a so-called bit set lowering pass to implement them efficiently. The
> current implementation of the bit set lowering pass requires whole-program
> visibility. The full details of why are described in the design document at:
> http://clang.llvm.org/docs/ControlFlowIntegrityDesign.html
>
> We currently achieve whole-program visibility using LTO. The trouble with LTO
> is that it comes with a significant compile time cost -- on large programs
> such as Chrome, compiling with link-time optimization can be over 7x slower
> (over 3 hours has been measured) than compiling without.

We’ve had some recent improvements that speed things up considerably, and hopefully things will continue to get faster, but I’m sure there will always be cases where LTO is slower.

Today I found http://reviews.llvm.org/D8431 which seems to fix one of the
big performance issues I was suffering from. Binary size is still an issue,
though, and I've found that opt-level=1 makes a significant improvement there.

> So I would like there to be a way for users to choose whether to apply
> optimizations, and how much optimization to apply.
>
> Achieving this requires a design for how users should specify the level of
> optimization to apply, as well as a design for changes to the clang driver
> and the various LTO plugins so that the plugin knows whether optimizations
> are required.
>
> Solutions
> ---------
>
> 1) Controlled at compile time
>
> Strawman proposal for command line syntax:
>
> -flto-level=X means optimize at level X. At link time, the LTO plugin will
> take the maximum of all -flto-level flags and optimize at that level.
>
> -flto-level is inferred from other flags if not specified:
>
> -flto implies -flto-level=2.
> If -flto not specified, -O >= 1 implies -flto-level=1.
> Otherwise, default to -flto-level=0.
>
> This is probably easier to implement in a supported way. We can pass the
> LTO level to the linker via module flags as shown in the patches attached to
> http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150316/266778.html
>
> 2) Controlled at link time
>
> -flto-level has the same semantics as in the previous sub-section, except it is
> instead passed at link time.
>
> This is to a certain extent possible to implement with libLTO by passing
> -mllvm flags to the linker, or with gold by passing -plugin-opt flags.
>
> According to Duncan, passing flags to libLTO this way is unsupported --
> if we did want to accept flags at link time, and we absolutely don't want
> to pass flags to the linker that way, I suppose we could do something like
> have the clang driver synthesize a module containing the module flags we want.

Option (2) makes more sense to me, but I don’t like the idea of introducing a new command line option.

I assume you mean a driver command line option.

At least for now, this seems like a fairly special-purpose request for CFI. I haven’t heard anyone else asking for LTO with minimal optimization. How about if you just pass the “-mllvm” options yourself when using CFI?

That would work if we had something like an -opt-level flag that the gold and
libLTO plugins understand. (We already have disable-opt on the libLTO side,
but I'd still like a way of saying opt-level=1.)

If it turns out that there are lots of people who want this feature, I could imagine that we might someday repurpose the existing -O optimization options to pass something to the linker to control LTO optimization. The downside of that is the clang driver doesn’t know whether the link will involve LTO or not, so it would have to pass those flags to the linker all the time. That’s not a real problem, but it’s just extra complexity that doesn’t seem justified unless it benefits more people.

Seems reasonable.

Thanks,

Having an analogue of -O0/-O1/-O2/-O3 for the LTO pipeline makes
sense, I think.

I agree that something along the lines of option 2 is probably the best.
Some questions:

* Should "clang -O3 foo.o -o foo" use LTO with -O3?
* Should "clang foo.o -o foo" use LTO with -O0? That would be a fairly
big change. Maybe we could make the LTO default be 3?
* Should we just add a --ltoO to the clang driver that is independent of -O?
* Some linkers already take a -O(1,2,3) option. Should we try to
forward that or should we differentiate LTO optimizations and general
linker optimizations?

If we want to differentiate linker and LTO optimizations, adding a -O
plugin option to the gold plugin should be fine. As Bob points out,
for ld64 for now we would just use -mllvm.

Cheers,
Rafael

Having an analogue of -O0/-O1/-O2/-O3 for the LTO pipeline makes
sense, I think.

I agree that something along the lines of option 2 is probably the best.
Some questions:

  • Should “clang -O3 foo.o -o foo” use LTO with -O3?
  • Should “clang foo.o -o foo” use LTO with -O0? That would be a fairly
    big change. Maybe we could make the LTO default be 3?
  • Should we just add a --ltoO to the clang driver that is independent of -O?
  • Some linkers already take a -O(1,2,3) option. Should we try to
    forward that or should we differentiate LTO optimizations and general
    linker optimizations?

The linker taking -O1,2,3 as a start is fine for sure. I’d rather approach this with clang driving everything than with the linker driving everything, but that ship may have sailed.

If we want to differentiate linker and LTO optimizations, adding a -O
plugin option to the gold plugin should be fine. As Bob points out,
for ld64 for now we would just use -mllvm.

Sure. Ultimately, a better command line interface, similar to the one we already have in clang for enabling/disabling passes (or, perhaps, one that’s even better - we’re not very good at that at the moment), would be a good place to be. Otherwise the interface is just going to turn into some sort of special-case hell for whatever everyone wants to do at the LTO level.

-eric

Thanks all. The attached patch implements the proposal of adding -O to libLTO,
llvm-lto and the gold plugin, which seems to have consensus as a reasonable
first step.

Thanks,

0001-libLTO-llvm-lto-gold-Introduce-flag-for-controlling-.patch (13.1 KB)

+ OptLevel = opt[1] - '0';

Please check and reject things like -OX at least in the gold plugin.

Can you add a test showing that

* createLowerBitSetsPass is run at -O0
* the addLateLTOOptimizationPasses passes are run at -O1, but not -O0

I think the patch is fine otherwise, but wait for a review from
someone on the ld64 side (Duncan, Manman or Bob for example).

Thanks,
Rafael
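
For context, the gating those tests would demonstrate amounts to something
like the sketch below, written against the legacy pass-manager APIs of that
era; the helper and the exact pass set are illustrative, not the patch
itself:

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/Transforms/IPO.h"
    #include "llvm/Transforms/IPO/PassManagerBuilder.h"

    // Bit set lowering is required for CFI, so it runs even at -O0; the
    // optimization pipeline is only added for -O1 and above.
    static void addLTOPasses(llvm::legacy::PassManager &PM, unsigned OptLevel) {
      PM.add(llvm::createLowerBitSetsPass());
      if (OptLevel == 0)
        return;
      llvm::PassManagerBuilder PMB;
      PMB.OptLevel = OptLevel;
      PMB.populateLTOPassManager(PM);
    }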

This SGTM in principle. The specific set of passes that you've enabled at
-O1 seems strangely small to me, but we can adjust that later.

Should this -O level be shared with CodeGen?

+ OptLevel = opt[1] - '0';

Please check and reject things like -OX at least in the gold plugin.

Same with the libLTO API and `llvm-lto`.

It might be nice to write a single utility function to verify this that's
shared between the three consumers?
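
Such a shared helper might look roughly like this (the name and signature
are hypothetical); it would also cover the earlier point about rejecting
malformed values like -OX:

    #include "llvm/ADT/StringRef.h"

    // Accept "O0" through "O3" only; reject anything else, e.g. "OX" or "O12".
    static bool parseOptLevel(llvm::StringRef Arg, unsigned &Level) {
      if (Arg.size() != 2 || Arg[0] != 'O' || Arg[1] < '0' || Arg[1] > '3')
        return false;
      Level = Arg[1] - '0';
      return true;
    }

In the gold plugin this would replace the unchecked OptLevel = opt[1] - '0',
and libLTO and llvm-lto could call the same function once they share some
flag-parsing code.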

This SGTM in principle. The specific set of passes that you've enabled at
-O1 seems strangely small to me, but we can adjust that later.

Sure. (It seemed best to me for this set to be small, at least for now,
because LTO is single-threaded, unlike regular optimization in a parallelized
build system, and so the cost of enabling a given pass is larger, especially
at lower optimization levels. The passes I selected seemed to provide enough
benefit to justify the cost.)

Should this -O level be shared with CodeGen?

Seems reasonable, done.

>
> + OptLevel = opt[1] - '0';
>
> Please check and reject things like -OX at least in the gold plugin.

Same with the libLTO API and `llvm-lto`.

It might be nice to write a single utility function to verify this that's
shared between the three consumers?

Maybe. That might wait until the consumers share some kind of common flag
parsing API though. At the moment they're different enough that it's simplest
to just write out the check in each one.

>
> Can you add a test showing that
>
> * createLowerBitSetsPass is run at -O0
> * the addLateLTOOptimizationPasses passes are run at -O1, but not -O0

Done.

Thanks,

0001-libLTO-llvm-lto-gold-Introduce-flag-for-controlling-.patch (17.8 KB)

Looks good I guess.

The comment over the lower bitsets patch should probably reference why it’s “required” and what it’s for etc. This kind of pass addition is honestly one of the reasons why I was wanting command line options. Adding it and then removing it doesn’t affect too much, but unlike some of the target passes that are subtarget dependent, the bitset pass is only useful if you happened to build with the CFI stuff on.

-eric

Looks good I guess.

r232769.

The comment over the lower bitsets patch should probably reference why it's
"required" and what it's for etc.

Done.

Thanks,