Question about using Steensgaard's pointer analysis in poolalloc

Hi LLVM dev team:
I am now doing an experiment comparing Steensgaard-style and Andersen-style pointer analysis on LLVM. Since the Steensgaard pointer analysis is in the "poolalloc" module, I installed poolalloc release 2.6 on my machine (Intel x86_64, Red Hat Enterprise Linux 5.1, gcc 4.2.4); two directories, "include" and "lib", were created by the installation, but no binary files were generated.
I loaded the poolalloc module into the opt program according to the poolalloc "README" file:
opt -load <path to pool allocator> -poolalloc <other opt options>
Here's my command:
opt -load=/home/cmchen/INSTALL/llvm-common/lib/libpoolalloc_rt.so -poolalloc -analyze -print-alias-sets test.bc
and here's the error message:
opt: Unknown command line argument '-poolalloc'. Try: 'opt --help'
So, my questions are:
1. Did I install the poolalloc module the wrong way, or did I not install it completely?
2. How should I load the module correctly? I used the path of the poolalloc shared object file as the <path to pool allocator>, because no binary file was generated.
Could you give me some suggestions? Thank you.

Congming Chen wrote:

Hi LLVM dev team:
I am now doing an experiment comparing Steensgaard-style and
Andersen-style pointer analysis on LLVM. Since the Steensgaard pointer
analysis is in the "poolalloc" module, I installed poolalloc release
2.6 on my machine (Intel x86_64, Red Hat Enterprise Linux 5.1, gcc 4.2.4);
two directories, "include" and "lib", were created by the installation,
but no binary files were generated.

Generally, we don't use the files created by "make install." Instead,
we just compile the code and use the files directly out of the
Release/bin (or Debug/bin) directories of the LLVM object tree.

I loaded the poolalloc module into the opt program according to the
poolalloc "README" file:
opt -load <path to pool allocator> -poolalloc <other opt options>
Here's my command:
opt -load=/home/cmchen/INSTALL/llvm-common/lib/libpoolalloc_rt.so
-poolalloc -analyze -print-alias-sets test.bc
and here's the error message:
opt: Unknown command line argument '-poolalloc'. Try: 'opt --help'
So, my questions are:
1. Did I install the poolalloc module the wrong way, or did I not
install it completely?
2. How should I load the module correctly? I used the path of the
poolalloc shared object file as the <path to pool allocator>, because
no binary file was generated.
Could you give me some suggestions? Thank you.

There are two problems:

1) You need to load the library containing DSA first. To do that, you
need to use the -load <path>/libLLVMDataStructure.so option.

2) I believe you are loading the wrong library. You want to load
libpoolalloc.so and not libpoolalloc_rt.so. The former is the LLVM
poolalloc transform pass; the latter is the run-time library
implementing the pool allocation functions.

If you're only interested in DSA (for points-to and alias analysis),
then you don't need poolalloc. Just use:

opt -load <path>/libLLVMDataStructure.so <dsa passes you want to run>
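
For example, to look at the alias sets computed with the Steensgaard-based analysis, something along these lines should work (I believe the pass is registered as -steens-aa in the DSA tree, but I haven't double-checked the exact name in the 2.6 release):

opt -load <path>/libLLVMDataStructure.so -steens-aa -analyze -print-alias-sets test.bc

You can also substitute -aa-eval -disable-output for -analyze -print-alias-sets to get opt's usual alias-analysis precision statistics instead of the raw alias sets.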

If you want to run poolalloc, then you do the following:

opt -load <path>/libLLVMDataStructure.so -load <path>/libpoolalloc.so
-poolalloc input.bc -f -o output.bc

A couple of warnings about the alias analysis passes in DSA:

1) I believe all the alias analysis implementations are based on DSA.
DSA is a unification-based algorithm, so I think you might see
unification even in non-unification algorithms like Andersen's. Andrew,
does this sound correct?

2) I have not used the alias analysis passes in DSA, so I don't know how
well they work.

-- John T.

Hi, LLVM dev team:
Thanks for your suggestions. I have done the experiment to compare the two pointer analysis methods (Andersen and Steensgaard) in LLVM, but the result was unexpected. In each test, I compare the two methods using the same optimization; there are several tests, each with a different optimization. The benchmark is all 11 C programs in SPEC CINT2000. In all the tests, I found very little performance difference between Andersen and Steensgaard. Here is an example of the options in one of the tests:
llvmc -opt -Wo,=-O3 -Wo,=-load=path_of_libLLVMDataStructure.so -Wo,=-steens-aa
llvmc -opt -Wo,=-O3 -Wo,=-anders-aa
The other optimizations are: dead code elimination, dead store elimination, constant propagation, -O1, etc.

And now what is confusing me is:
1. Did the optimizations really use the results of Andersen or Steensgaard?
2. It seems both Andersen and Steensgaard have some trouble compiling certain programs: 175.vpr, 176.gcc, and 197.parser for Andersen, and 253.perlbmk for Steensgaard.
3. Is the difference between -O1 and -O3 in LLVM similar to that in gcc?

Thank you very much.

-congming

Congming Chen wrote:

Hi, LLVM dev team:
Thanks for your suggestions. I have done the experiment to compare the two
pointer analysis methods (Andersen and Steensgaard) in LLVM, but the result
was unexpected. In each test, I compare the two methods using the same
optimization; there are several tests, each with a different optimization.
The benchmark is all 11 C programs in SPEC CINT2000. In all the
tests, I found very little performance difference between Andersen and
Steensgaard. Here is an example of the options in one of the tests:
llvmc -opt -Wo,=-O3 -Wo,=-load=path_of_libLLVMDataStructure.so
-Wo,=-steens-aa
llvmc -opt -Wo,=-O3 -Wo,=-anders-aa
The other optimizations are: dead code elimination, dead store elimination,
constant propagation, -O1, etc.
And now what is confusing me is:
1. Did the optimizations really use the results of Andersen or Steensgaard?

Probably not, unless you inserted the AA pass at the right place in
the pipeline. Also most of the optimizations you listed shouldn't use
it.
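
To make sure the AA pass is actually consulted, the simplest thing is to
drive opt yourself and put the alias-analysis pass on the command line
ahead of the transforms that query it; otherwise they just fall back to
the default (basicaa). A rough sketch, using -licm and -gvn as examples
of passes that do query alias analysis (adjust paths and pass names for
your tree):

opt -load <path>/libLLVMDataStructure.so -steens-aa -licm -gvn input.bc -f -o output.bc
opt -anders-aa -licm -gvn input.bc -f -o output.bc

(-anders-aa assuming your LLVM version still has it; see below.) I think
-debug-pass=Structure and -stats can help confirm which analyses actually
ran.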

2. It seems both Andersen and Steensgaard have some trouble compiling
certain programs: 175.vpr, 176.gcc, and 197.parser for Andersen, and
253.perlbmk for Steensgaard.

Andersen's has been removed from mainline, and Steensgaard's is not in
mainline and isn't tested against many programs very often (it's actually
rarely used).

Andrew