On Tue, Apr 3, 2018 at 12:47, via llvm-dev <llvm-dev@lists.llvm.org> wrote:
All,

A recent commit, D43040/r324557, changed the behavior of the gold plugin when compiling with LTO. The change now causes the codegen optimization level to default to CodeGenOpt::Default (i.e., -O2) rather than use the LTO optimization level. The argument was made that the LTO optimization level should control the amount of cross-module optimizations done by LTO, but it should not control the codegen optimization level; that should be based on the optimization level used during the initial compilation phase (i.e., bitcode generation).
I actually don't understand this clearly.
Unless we're saying that the -OX flag during LTO would change only the IR optimization level (which is clumsy, because what is a "cross-module optimization" on its own?), why would the `-OX` flag change the codegen optimization level when passed to clang without LTO, but not during LTO?
I'm simply stating the argument made by Peter in r324557; this is not my
opinion. Personally, I think it seems reasonable to allow the optimization
flag used during the link step to control the codegen optimization level.
However, this is no longer the case after r324557.
FWIW, I would very much be on board with reverting r324557 and then changing lld to mirror the behavior of the gold plugin, but I don't know whether that's the consensus in the community.
To answer your question, Mehdi: what I mean by "cross-module optimization" is simply a series of passes that operate on a module after parts of other modules have been linked into it, resulting in IPO between modules. For example, an inlining pass followed by scalar optimization passes.
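To make that concrete, here is a minimal sketch of what such a "middle" pipeline could look like over the merged module. It uses the legacy pass manager, and the pass selection is purely illustrative; it is not what LTO actually schedules:

  #include "llvm/IR/LegacyPassManager.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Transforms/IPO.h"
  #include "llvm/Transforms/Scalar.h"
  #include "llvm/Transforms/Scalar/GVN.h"

  using namespace llvm;

  // Run a small cross-module pipeline over the merged module: the inliner
  // now sees call edges that cross the original module boundaries, and the
  // scalar passes clean up what inlining exposes.
  static void runCrossModulePasses(Module &MergedModule) {
    legacy::PassManager PM;
    PM.add(createFunctionInliningPass()); // IPO across the linked modules
    PM.add(createSROAPass());             // scalar cleanup after inlining
    PM.add(createGVNPass());
    PM.run(MergedModule);
  }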
The way I think about LTO is that it effectively splits the pass pipeline in two, which lets us put cross-module optimizations in the middle.
What this means semantically is that LTO opt level 0 would essentially run the two parts of the pipeline one after the other, giving you much the same binary as a non-LTO build while still allowing LTO-only features such as CFI to work. One might also have chosen to compile parts of one's program with different optimization levels, and those levels would need to be respected by the code generator. For this to work, we must at least use the same CG opt level that was used at compile time.
Higher LTO opt levels would run more passes in the middle, perhaps at more aggressive settings, resulting in more cross-module optimization. But we should still at least try to approximate the optimization level requested for each particular function.
Ideally, we would use the same optimization level that would have been used
at compile time. Such an optimization level would be communicated via an
attribute, as proposed here. However, in the absence of that information,
it does seem reasonable to make a guess about the user intent from the LTO
opt level. If a user specifies an LTO opt level of 3, it probably means
that the user cares a lot about performance, so we can guess a CG opt level
of CodeGenOpt::Aggressive. Otherwise, we can guess a CG opt level of
CodeGenOpt::Default since this would seem to provide the best balance of
performance, code size and debuggability.
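In code, that guess amounts to a small mapping along the following lines. This is a sketch of the heuristic described above, not the exact logic in the LTO library:

  #include "llvm/Support/CodeGen.h"

  // Guess a codegen opt level from the LTO opt level when no per-function
  // information is available. An LTO opt level of 3 is taken as a signal
  // that the user cares a lot about performance; everything else falls
  // back to the default level.
  static llvm::CodeGenOpt::Level guessCGOptLevel(unsigned LTOOptLevel) {
    if (LTOOptLevel >= 3)
      return llvm::CodeGenOpt::Aggressive;
    return llvm::CodeGenOpt::Default; // balance of perf, size, debuggability
  }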
So this is the direction that I would propose:
- Remove the ability to override the CG opt level from the LTO API. For now, we can infer it from the LTO opt level as mentioned above.
- Add function attributes for signaling the compile-time opt level and start moving towards using them in preference to TargetMachine::OptLevel (a rough sketch of how such an attribute might be consumed follows this list).
- Remove the code for inferring the CG opt level from the LTO opt level, as it is now redundant with the function attribute.
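To illustrate the second bullet, the code generator could prefer a per-function attribute over TargetMachine::OptLevel roughly as follows. The attribute name "opt-level" and its string encoding are hypothetical here; no such attribute exists today, and its exact spelling would be part of the proposal:

  #include "llvm/ADT/StringRef.h"
  #include "llvm/IR/Function.h"
  #include "llvm/Support/CodeGen.h"

  using namespace llvm;

  // Hypothetical helper: prefer a per-function "opt-level" string attribute
  // (recorded at compile time) over the module-wide TargetMachine opt level.
  static CodeGenOpt::Level getFunctionCGOptLevel(const Function &F,
                                                 CodeGenOpt::Level TMLevel) {
    if (!F.hasFnAttribute("opt-level"))
      return TMLevel; // no attribute: fall back to TargetMachine::OptLevel
    StringRef Val = F.getFnAttribute("opt-level").getValueAsString();
    if (Val == "0")
      return CodeGenOpt::None;
    if (Val == "1")
      return CodeGenOpt::Less;
    if (Val == "3")
      return CodeGenOpt::Aggressive;
    return CodeGenOpt::Default; // "2" or anything unrecognized
  }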
This would seem to get us to a desired state without regressing users who
might depend on being able to use the aggressive CG opt level from LTO.
Thoughts?
Peter
Are we encoding O1/O2/O3 optimization level into function attributes