Profiling LLVM JIT code

Hey guys,

I am currently working on a project that uses JIT compilation to compile incoming user requests to native code. Are there any best practices for profiling the generated code?

My project uses gperftools/pprof for profiling. Is there a way to hook the two up? Are there any other profiling methods that work? This page describes how to debug JIT code with GDB; I wonder if something similar could be done for gperftools/pprof.

Regards,
– Priyendra

Hi Priyendra,

There is support for oprofile and the Intel® VTune™ Performance Analyzer, but either one needs to be explicitly enabled when LLVM is built. If you use MCJIT (as opposed to the older JIT), oprofile support isn't in place yet.
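Enabling it at build time is only half of it; at run time the listener also has to be registered on your ExecutionEngine. Roughly like this (a sketch, assuming the factory function declared in JITEventListener.h, which should return null when oprofile support wasn't compiled in):

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/JITEventListener.h"

// Attach the oprofile listener to an already-created ExecutionEngine so the
// profiler is told about every function the JIT emits.
void attachOProfileListener(llvm::ExecutionEngine *EE) {
  // Expected to return null when LLVM was not configured --with-oprofile.
  llvm::JITEventListener *Listener =
      llvm::JITEventListener::createOProfileJITEventListener();
  if (Listener)
    EE->RegisterJITEventListener(Listener);
}

There should be an analogous createIntelJITEventListener() factory for the VTune listener.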

Both of these work by providing a JITEventListener that receives a notification when new code is emitted and passes it to the profiling tool via some tool-specific notification API. I'm not familiar with pprof, but it probably wouldn't be very difficult to write a new event listener to add support for it.
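For the old JIT the listener interface is small; a skeleton might look something like this (untested and only a sketch: the header paths are from that era's tree layout, and the method bodies are placeholder comments since I don't know what gperftools expects to be told):

#include "llvm/ExecutionEngine/JITEventListener.h"
#include "llvm/Function.h"  // llvm/IR/Function.h on newer trees
#include <cstddef>

using namespace llvm;

// Skeleton listener for the old JIT.  These are the two callbacks the old
// JIT fires; what to do with the name/address/size is the pprof side, which
// is left as placeholder comments here.
class PProfJITEventListener : public JITEventListener {
public:
  virtual void NotifyFunctionEmitted(const Function &F, void *Code,
                                     size_t Size,
                                     const EmittedFunctionDetails &Details) {
    // Hypothetical hook: record that [Code, Code + Size) now holds native
    // code for the function named F.getName().
  }

  virtual void NotifyFreeingMachineCode(void *OldPtr) {
    // Hypothetical hook: the code previously emitted at OldPtr is being freed.
  }
};

// Registration, same as for the built-in listeners:
//   EE->RegisterJITEventListener(new PProfJITEventListener());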

You can find the oprofile code in ‘llvm/lib/ExecutionEngine/OProfileJIT’ to use as an example.

-Andy

Thanks for the info. I am using the old JIT, so that should not be a problem.

I will take a look at using oprofile. I have never used it, so there will be a bit of a learning curve.

I notice that the configure script has a --with-oprofile option. In addition to enabling that, is there anything else that needs to be done? My copy of LLVM is compiled with --enable-optimized. Will --with-oprofile work fine with that, or should I disable optimization?

Regards,
– Priyendra

Profiling using oprofile should work just fine with the --enable-optimized option. If the function being JITed includes location metadata for a source file name and line number, that will be used. Otherwise, you’ll just get function names and addresses.

-Andy