I’m assisting my doctor, who is doing research and wants to use the LLVM compiler; my job is to profile-build the benchmarks using llvm-prof.
What I want to know is the following:
1- Does LLVM support profile-feedback optimizations?
2- When I used llvm-prof, its input was an object file (not a binary, as with other compilers). My question is: how can I profile a whole benchmark program using llvm-prof?
3- Is there a way to print spill-code information (e.g., the spill-code count in a single function or basic block)?
2- When I used llvm-prof, its input was an object file (not a binary, as with other compilers). My question is: how can I profile a whole benchmark program using llvm-prof?
I haven’t done it, but I think the correct answer is to use llvm-ld to generate a single bitcode file, then run llvm-prof.
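As a rough sketch of that suggestion, assuming the old LLVM 2.x-era toolchain in which llvm-ld still exists (file names here are hypothetical placeholders):

```shell
# Compile each source file to LLVM bitcode (llvm-gcc era; clang -emit-llvm
# works similarly in later releases).
llvm-gcc -emit-llvm -c foo.c -o foo.o
llvm-gcc -emit-llvm -c bar.c -o bar.o

# llvm-ld links bitcode files together; alongside the wrapper script "prog"
# it emits a single combined bitcode file, prog.bc.
llvm-ld foo.o bar.o -o prog

# After an instrumented run has produced llvmprof.out, analyze it against
# the combined bitcode file.
llvm-prof prog.bc llvmprof.out
```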
3- is there a way to print the spill code information (e.g. spill code count in a single function or basic block) ?
-stats gives you aggregate counts. Unfortunately, I don’t know a way to do per-function reporting without using llvm-extract.
You might be able to scrape -debug-only=spiller output for per-block info.
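The two flags mentioned above might be combined like this (a sketch; prog.bc is a placeholder bitcode file, and note that -debug-only only works with an assertions-enabled build of LLVM):

```shell
# -stats prints aggregate statistics to stderr after code generation;
# grep pulls out the spiller-related counters.
llc -stats prog.bc -o prog.s 2>&1 | grep -i spill

# -debug-only=spiller dumps the spiller's debug trace, which mentions the
# basic blocks involved; capture it for later scraping.
llc -debug-only=spiller prog.bc -o prog.s 2> spiller.log
```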
You might want to read the llvm-prof documentation if you haven’t already: . The documentation mentions a script in the utils directory that automates some of the profiling tasks for you.

I suspect the way llvm-prof works is to compile your whole program to a single LLVM bitcode file, run a transform on it, generate native code, link in the LLVM profiling run-time library, and then run the program. You then use llvm-prof to analyze the original bitcode file and the output from running the program to get the report. That’s just a guess, though; I’ve never used llvm-prof myself. I bet looking at the script in the utils directory will shed light on how to use llvm-prof.

– John T.

On 3/21/11 4:46 PM, Andrew Trick wrote:
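That guessed workflow can be sketched as commands, assuming the LLVM 2.x-era edge-profiling pass and runtime library (pass and library names are from that era and may differ in other releases; prog.bc is a placeholder):

```shell
# 1. Instrument the combined bitcode with edge-profiling counters.
opt -insert-edge-profiling prog.bc -o prog.profile.bc

# 2. Generate native code from the instrumented bitcode.
llc prog.profile.bc -o prog.profile.s

# 3. Link against the LLVM profiling runtime (libprofile_rt).
gcc prog.profile.s -lprofile_rt -o prog.profile

# 4. Run the program; the runtime writes llvmprof.out on exit.
./prog.profile

# 5. Report: llvm-prof reads the ORIGINAL bitcode plus the run's output.
llvm-prof prog.bc llvmprof.out
```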
Please note that it is a very basic tutorial, with a simple example. But it can be used as a first contact with the LLVM profiling framework. I hope it can be useful.