LLVM runtime optimization during LLVM Analysis Passes.

Hi
Does current LLVM support a dynamic optimization level for generated LLVM IR? In other words, is it possible to monitor a function's execution pattern at runtime and apply a different optimization level to that particular function using LLVM Analysis Passes?

Thanks

Regards
Sri.

On the optimization side, profile-guided optimizations are currently somewhat limited. However, a couple of folks are actively working in this area.

All of the runtime support (i.e. recompiling, linking, etc…) is out of scope for LLVM. Since this is the majority of the work required to support PGO in a JIT, that’s probably your answer.
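To make that scope concrete, the "runtime support" in question is roughly the loop below, which the JIT client (not LLVM) would own. This is only a sketch under assumed names: HotThreshold, CallCounters, and recompileAtHigherOptLevel are hypothetical placeholders for whatever your embedding runtime actually provides.

#include <cstdint>
#include <map>
#include <set>
#include <string>

static const uint64_t HotThreshold = 10000;             // assumed trip count
static std::map<std::string, uint64_t *> CallCounters;  // filled in by the JIT client
static std::set<std::string> AlreadyOptimized;

// Placeholder: re-run codegen for one function at a higher -O level and
// patch in the new address. This is the part LLVM leaves to the client.
void recompileAtHigherOptLevel(const std::string &FnName);

// Periodically called by the client runtime: find functions whose call
// counts have crossed the threshold and recompile each of them once.
void pollCountersAndRecompile() {
  for (const auto &Entry : CallCounters) {
    if (*Entry.second > HotThreshold && !AlreadyOptimized.count(Entry.first)) {
      recompileAtHigherOptLevel(Entry.first);
      AlreadyOptimized.insert(Entry.first);
    }
  }
}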

p.s. I’m answering what I think your question was. If this doesn’t help, you’ll need to clarify your question.

Philip

Hi Philip
I understood what you said. Basically, you are saying that identifying hot paths at runtime and recompiling those sections would be hard to do at the LLVM-IR level. I am currently planning some analysis work on an adaptive mechanism for the LLVM JIT, so that we see an improvement when a function turns out to be called very frequently at runtime. Do you have any idea which approach would be the most convenient starting point for this work?
Thanks.

Regards
Sri.

Doing this “in llvm-IR” is almost certainly the wrong approach. Decide on a profiling mechanism for your source language (instrumentation, sampling, etc…). Modify your IR generation to include profiling information. Modify your runtime to recompile/relink “hot” functions.

The only internal-to-LLVM part is improving the profile-guided optimizations themselves.
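For concreteness, here is a minimal sketch of the "include profiling information in your IR generation" step, assuming a per-function call counter is enough for your purposes. The helper name instrumentFunctionEntry and the ".count" global naming are just illustrative choices, not an established LLVM facility.

#include "llvm/IR/Constants.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Add an internal i64 global "<fn>.count" and increment it on every entry
// to F. The client runtime reads these counters to decide what is "hot".
void instrumentFunctionEntry(Function &F) {
  Module *M = F.getParent();
  LLVMContext &Ctx = M->getContext();
  Type *I64 = Type::getInt64Ty(Ctx);

  auto *Counter = new GlobalVariable(
      *M, I64, /*isConstant=*/false, GlobalValue::InternalLinkage,
      ConstantInt::get(I64, 0), F.getName() + ".count");

  IRBuilder<> B(&*F.getEntryBlock().getFirstInsertionPt());
  Value *Old = B.CreateLoad(I64, Counter);
  Value *New = B.CreateAdd(Old, ConstantInt::get(I64, 1));
  B.CreateStore(New, Counter);
}

Run something like this over each function as you emit IR, then have your runtime poll the counters and recompile/relink the hot functions, as described above.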

Philip