How to use LLVM-test-suite to experiment with alias analysis?

Hi,
I am new to LLVM, and my requirement is to do a cross comparison of different alias analysis methods in LLVM. For that I am planning to use llvm-test-suite: compile each benchmark in the suite with Clang, apply the analyses to them, and measure the number of aliasing pointers identified by each. Is there a solid way to execute this, and is there any issue with what I am trying to do?

LLVM has been using AAEvaluator to test alias analyses for quite some time, you can give it a try. If you really don’t want to change the code in tree, you can write a Pass plugin to insert that Pass into the Pass pipeline.
Though IIRC many useful AA results are hidden behind command line flags (e.g. this one) and it’s well known that (new PassManager) Pass plugins don’t work really well with LLVM’s command line infrastructure, so you might still need to change some code in tree eventually to get useful results.

1 Like

Thank you for the reply @mshockwave . I used several of those flags to extract alias information for simple programs. But I want to benchmark with more complicated test cases where I can differentiate between multiple alias analysis implementations, and I don’t have a proper idea of how to use llvm-test-suite.

Right, I was suggesting building llvm-test-suite with a Clang that loads and runs the said Pass plugin.
If you’re not familiar with building llvm-test-suite, I believe this page will be useful. Instead of running the cmake command described on that page, you’ll run something like:

cmake -DCMAKE_C_COMPILER=<path to llvm build>/bin/clang \
      -DCMAKE_C_FLAGS="-fpass-plugin=/path/to/plugin.so" \
      -C../test-suite/cmake/caches/O3.cmake \
      ../test-suite

Hi @mshockwave , thanks again. Can you explain what -fpass-plugin=/path/to/plugin.so is and how to generate that file?

-fpass-plugin=... itself is a compiler flag you pass to Clang (i.e. via CMAKE_C_FLAGS). The plugin.so is a (new PassManager) Pass plugin. You can check out this page, especially the “Registering passes as plugins” section, for more information.
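For reference, once you have a C++ source file implementing the llvmGetPassPluginInfo() entry point described on that page, it can be compiled into a shared library roughly like this. This is only a sketch: the file name MyPlugin.cpp and the pass name are placeholders, and it assumes the llvm-config from the same LLVM build as your Clang is on PATH.

```shell
# Build a new-PM Pass plugin as a shared library (MyPlugin.cpp is a
# placeholder for your source implementing llvmGetPassPluginInfo()).
clang++ -shared -fPIC $(llvm-config --cxxflags) MyPlugin.cpp -o plugin.so

# Sanity check: load the plugin into opt and run the pass it registers
# (<your-pass-name> is whatever name the plugin registered).
opt -load-pass-plugin=./plugin.so -passes=<your-pass-name> input.bc -o /dev/null
```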

@mshockwave It seems using the test-suite may not be the best way. I want to extend the following workflow
to much larger C++ projects to analyze the performance. Is there a more sophisticated approach for that?

clang++ -c -emit-llvm initial.cpp -o hello.bc
opt -enable-new-pm=0 --basic-aa --globals-aa --tbaa -aa-eval -print-all-alias-modref-info < hello.bc >/dev/null 2>hello.results
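As a side note, -enable-new-pm=0 relies on the legacy pass manager, which has been removed from opt in recent LLVM releases. A rough new-pass-manager equivalent of the same experiment, assuming a recent opt, would look like this (the AA pipeline shown is just one example combination):

```shell
clang++ -c -emit-llvm initial.cpp -o hello.bc
# Run AAEvaluator (aa-eval) with an explicit AA pipeline; the alias/modref
# statistics are printed to stderr, so redirect stderr to capture them.
opt -aa-pipeline=basic-aa,tbaa -passes=aa-eval \
    -print-all-alias-modref-info hello.bc -o /dev/null 2> hello.results
```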

I guess you’re asking if there is any way to achieve those two commands in a single clang command.
If that’s the case and you don’t want to build a custom Clang: No. That’s why I was suggesting the plugin approach.

The reason is that AAEvaluator is not included in any Pass pipeline used by Clang. So if you choose not to use the plugin approach, I think the only way is to change how Clang creates its Pass pipeline, which means building a custom Clang.

I’m actually a little confused about this: llvm-test-suite is nothing but a (large) collection of C/C++ projects used to test LLVM / Clang. The way llvm-test-suite is usually used is that we have a custom Clang / LLVM, and we use that custom compiler to build llvm-test-suite to see if the said compiler crashes, produces incorrect code, or produces code with performance regressions. So…

To me llvm-test-suite is exactly what you’re looking for, no?

@mshockwave Yes, I was not sure whether llvm-test-suite would be the best match, and it seems that it is. Sorry for the confusion.

On this, not exactly. I don’t want to combine those two commands; I want to first create the .bc for the project and then apply different optimizations separately on it. Though the procedure is obvious for small files, I couldn’t find a way to apply clang++ to large projects and then run opt on them. Again, sorry for the confusion.

I guess you mean producing a single .bc file for an entire project comprising potentially thousands of C/C++ files. If that’s the case, you definitely need to use LTO (Link-Time Optimization) to produce that kind of “merged” bitcode file. (Theoretically you could use llvm-link, but in my experience that tool doesn’t scale really well.)
Here are the steps:

  1. First, add the following compiler flag: -flto. In the case of CMake you can add something like -DCMAKE_C_FLAGS="-flto" / -DCMAKE_CXX_FLAGS="-flto" to your cmake invocation.
  2. Second, add the following linker flag: -flto -Wl,--plugin-opt=-lto-embed-bitcode=post-merge-pre-opt. In the case of CMake you can add something like -DCMAKE_EXE_LINKER_FLAGS="-flto -Wl,--plugin-opt=-lto-embed-bitcode=post-merge-pre-opt" to your cmake invocation.
  3. Build the project
  4. Let’s say an executable foo is now built. Run the following command to extract the (merged) LLVM bitcode embedded in the ELF: objcopy foo --dump-section .llvmbc=foo.bc. The extracted bitcode will be placed in foo.bc. You can use llvm-dis to get the textual LLVM IR.
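The steps above can be condensed into a shell session along these lines. The project path and the executable name foo are illustrative; the linker flag is the one from step 2, and using LLD here anticipates the note below:

```shell
# Steps 1+2: configure with LTO and bitcode embedding (LLD as the linker).
cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ \
      -DCMAKE_C_FLAGS="-flto" -DCMAKE_CXX_FLAGS="-flto" \
      -DCMAKE_EXE_LINKER_FLAGS="-flto -fuse-ld=lld -Wl,--plugin-opt=-lto-embed-bitcode=post-merge-pre-opt" \
      ../project

# Step 3: build the project.
cmake --build .

# Step 4: extract the merged bitcode from the .llvmbc ELF section and
# disassemble it into textual LLVM IR.
objcopy foo --dump-section .llvmbc=foo.bc
llvm-dis foo.bc -o foo.ll
```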

Note that IIRC -lto-embed-bitcode is only exposed in LLVM 14 and later. Also, (full) LTO will consume A LOT of memory. One way to mitigate this is to use a better linker like LLD (preferred) or gold. For instance, to use LLD, add -fuse-ld=lld to the linker flags once LLD is installed.

@mshockwave Thank you very much for the description. Rather than just producing a .bc file, I am trying to implement the following steps.

  1. Compile the C++ project to an intermediate state (it doesn’t matter whether it is a single file or not).
  2. Apply different alias analyses (such as Steensgaard’s analysis or Andersen’s analysis) to that compiled code to measure how well those alias analysis methods perform.

If you really want to apply different optimizations to the exact same set of LLVM IR files (the intermediate state), LTO can help you with this too: an “object file” compiled in LTO mode is actually just an LLVM bitcode file. So you can follow step 1 in my previous reply, grab all the object files (e.g. find . -name "*.o"), and start from there.
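To make that concrete, here is a sketch of iterating over the LTO object files (which are really bitcode) and comparing two AA pipelines with AAEvaluator. The pipeline names and output file naming are just examples, and this assumes a recent opt with the new pass manager:

```shell
# Under -flto each *.o is LLVM bitcode, so opt can consume it directly.
for obj in $(find . -name "*.o"); do
  # AAEvaluator prints its alias/modref statistics to stderr; save one
  # result file per object and per AA pipeline for later comparison.
  opt -aa-pipeline=basic-aa -passes=aa-eval \
      -print-all-alias-modref-info "$obj" -o /dev/null 2> "$obj.basic-aa.txt"
  opt -aa-pipeline=tbaa,basic-aa -passes=aa-eval \
      -print-all-alias-modref-info "$obj" -o /dev/null 2> "$obj.tbaa.txt"
done
```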

An alternative solution would be modifying Clang to allow customization of the AA pipeline (represented by an AAManager instance), through new compiler flags or an environment variable, for instance, so that you can use that modified Clang to build target projects with different AA pipeline combinations.
More specifically, right before the default AAManager is built and registered into the FunctionAnalysisManager, preemptively register your custom AAManager into the FAM.

Thank you very much for your immense support, @mshockwave . These insights are really valuable to me.