JIT Optimization Levels

Hello,
there are several optimization levels in lli, such as -O0, -O1, -O2, and -O3. What do they mean, and how is run-time optimization performed in the LLVM JIT?

I am working on a project whose goal is to study the impact of lli optimizations. My IR is already optimized through opt; now I have to apply lli optimizations while giving varying inputs at run time, so I suppose the best optimization level also depends on the run-time input. For example, I am using the following command:

time lli -O3 sum-vec03.ll 5 2

After acquiring the execution times in different scenarios, I plan to use a machine learning algorithm to predict the appropriate flag setting for a given program and its run-time input.
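A data-collection loop for such a study might look like the following shell sketch. This is a hedged example: `demo.ll` is a minimal stand-in for the real sum-vec03.ll (which would also take its run-time arguments, e.g. 5 2), and the timings are only meaningful on a machine with LLVM's lli installed.

```shell
# Minimal stand-in IR module; the real experiment would use the actual
# program and pass its run-time inputs after the .ll file name.
cat > demo.ll <<'EOF'
define i32 @main() {
  ret i32 0
}
EOF

if command -v lli >/dev/null 2>&1; then
  # Sweep the JIT's CodeGen optimization levels, recording wall-clock time
  # for each; these timings would become the training data.
  for level in 0 1 2 3; do
    echo "timing lli -O$level"
    time lli -O"$level" demo.ll
  done
else
  echo "lli not found; install LLVM to run the sweep"
fi
```

Repeating the sweep over many programs and many run-time inputs would yield the (program, input, level, time) samples the learning step needs.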

Please help. Does this look like an appropriate approach? Is my methodology correct?

Thank You

If I'm not wrong, there are two distinct subsystems where optimizations can be applied. In the case of lli, llc, and other such tools, the optimization level is applied to code generation (CodeGen). The LLVM IR is their input and is not itself optimized. If you need both, you have to run opt first to optimize the IR, and then lli.
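A two-stage pipeline along those lines might be sketched as follows. This is illustrative, not the poster's exact setup: `demo.ll` stands in for the real module, and the flags shown are the standard opt/lli level switches (-O2 for IR passes, -O3 for the JIT's CodeGen).

```shell
# Minimal stand-in module (the real input would be the user's .ll file).
cat > demo.ll <<'EOF'
define i32 @main() {
  ret i32 0
}
EOF

if command -v opt >/dev/null 2>&1 && command -v lli >/dev/null 2>&1; then
  # Stage 1: IR-to-IR passes via opt (-S emits textual IR).
  opt -O2 -S demo.ll -o demo.opt.ll
  # Stage 2: CodeGen-level optimization inside the JIT via lli's -O flag.
  lli -O3 demo.opt.ll
  echo "lli exit status: $?"
else
  echo "opt/lli not found; install LLVM to run the pipeline"
fi
```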

I have already applied opt, so the IR that lli receives is already optimized.