Which optimizations are done by Clang, not LLVM

Hello.

I am trying to find out, from the Clang sources, which kinds of optimizations are done directly by Clang rather than by LLVM. The background: I have some ideas for optimizations that can only be done on C++ code, and as I understand it, Clang is the right place for them, but I cannot find any example.

Could you help me a little and point me to an existing example in the Clang sources?

Thank you.

Very few, if any, optimizations are done by Clang - where possible Clang augments the LLVM IR with extra information to enable LLVM to perform the optimizations.

Clang’s IR generation does vary depending on optimization level - but usually only in this sort of attribute-application context (e.g. applying the ‘inlinehint’ attribute above -O0, applying the ‘optnone’ attribute at -O0, etc.).
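For example (roughly - the exact attribute spellings are easy to check with -emit-llvm):

    // At -O0 both functions get the 'noinline' and 'optnone' attributes in
    // the emitted IR; at -O1 and above the 'inline' keyword is instead
    // translated into an 'inlinehint' attribute. The actual inlining
    // decision is still made entirely by LLVM.
    inline int twice(int x) { return x * 2; }
    int caller(int x) { return twice(x); }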

Maybe some of the memcpy optimization (turning struct copying into memcpies) is done in the frontend - because the frontend knows it’s allowed to copy the padding bytes along with the members if that’s more convenient.
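e.g. for something like this (just a sketch of the kind of code I mean):

    struct S { char c; int i; };   // padding between 'c' and 'i'

    void copy(S &dst, const S &src) {
      // IRGen can lower this trivial copy to a single llvm.memcpy of
      // sizeof(S) bytes, padding included, rather than copying the members
      // one at a time.
      dst = src;
    }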

Even very C++-y things like devirtualization are still, ideally, implemented in LLVM - the aim being to provide as general a tool as possible for frontends to express these kinds of constraints to the LLVM optimization pipeline.

  • Dave

IIRC, Clang CodeGen will do some branch-to-branch elimination, for blocks that are empty/missing at the source level. Then you have language-mandated things like RVO? But yeah, the front-end generally doesn’t want to do optimizations.
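One case I have in mind, roughly:

    void work();

    void once() {
      // The block for the loop condition here ends up containing nothing but
      // a branch to the exit; CodeGen folds that kind of branch-to-branch
      // away while it emits the IR (SimplifyForwardingBlocks in CGStmt.cpp,
      // IIRC).
      do {
        work();
      } while (false);
    }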

–paulr

We do a variety of optimizations in the frontend; we just don't think about most of them as optimizations because they're fairly easy and we always do them. For example, we devirtualize calls to virtual methods on local variables.
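A trivial example:

    struct Base { virtual int f() { return 1; } };
    struct Derived : Base { int f() override { return 2; } };

    int test() {
      Derived d;
      // d's dynamic type is known exactly, so IRGen emits a direct call to
      // Derived::f rather than a load through the vtable - no LLVM pass
      // involved, and it happens even at -O0.
      return d.f();
    }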

LLVM IR is poorly suited for many high-level optimizations because:
  - lowering to IR eliminates a lot of precise type and declaration information
  - lowering to IR makes it difficult to reason about the provenance of values at the right abstraction level for high-level optimization rules (e.g. all the stuff with pointer laundering for C++ devirtualization; there’s a sketch of the problem case after this list)
  - lowering to IR generally requires committing to concrete representations in ways that can inhibit or complicate optimizations (e.g. low-level value-copying routines that don't return anything that can serve as a value def for the copy)
  - IR-based optimizations tend to be abstract in ways that require uncomfortable trade-offs to enable (e.g. eagerly emitting TBAA metadata on every memory access, or eagerly instantiating and emitting inline functions for C++ devirtualization)
  - IR-based optimizations often require circumlocutions that interact poorly with other optimizations or otherwise prove unexpectedly difficult to lower to an optimal form (e.g. loads performed for llvm.expect)
  - high-level optimizations often have special-case exceptions which require extensions in order to express in IR or else the optimization has serious soundness problems (e.g. a number of cases with un-referenceable functions in v-tables)
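To make the pointer-laundering / provenance point above concrete, a sketch of the classic problem case:

    struct A { virtual void f(); };

    void g(A *a) {
      a->f();
      // It's tempting to conclude the second call dispatches to the same
      // target as the first, but the first call may have ended the lifetime
      // of *a and placement-new'd an object with a different dynamic type
      // into the same storage. At the IR level you need something like the
      // invariant.group / launder machinery to reason about that soundly.
      a->f();
    }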

Often we have a simple, local version of the optimization that we can do easily in the frontend which catches a lot of important use cases, and then extending that to cover more cases using LLVM-based data flow optimizations is a major research project. RVO/NRVO are exceptions only in that their simple, local versions are the only ones actually allowed.
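For instance, the simple, local case:

    #include <string>

    std::string make() {
      std::string s("hello");
      // NRVO: s is constructed directly in the caller's return slot, so
      // returning it involves no copy or move at all; Clang decides this
      // purely locally while emitting the function, with no data flow
      // analysis needed.
      return s;
    }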

John.