a question about LLCO

Hi everybody,
  Recently, I found the Lifelong Code Optimization project on the
website, and I have a question that I hope you can explain.

  On the project's home page, it says the goal of the project is to
enable modern programs to be optimized at link time with all static
binary code. I wonder why the library code must be static, i.e., why a
dynamically linked program is not considered, since most of the
programs we use daily are dynamically linked.

  Thanks a lot.

Best wishes

Hi Terry,

I'm not part of that project, but I'll take a stab at answering your
question. Vikram Adve is probably the right person to answer it.

The point of Lifelong Code Optimization is to continuously optimize the
code during its lifetime, even while it is running. By profiling the
code, it is possible to discover the program's hot spots and intensely
optimize those portions of the program. Since we're talking about pretty
intense optimization here, we're generally not talking about interpreted
or dynamically linked software. The overhead of dynamically linking a
library can be very large and it thwarts some of the goals of LLCO. When
the *whole* program is represented in LLVM, it is possible to apply
optimizations that you couldn't apply otherwise. If portions of the
program are dynamically loaded, then those optimizations are not
available. For example, if you can analyze the entire program, you can
remove dead global variables or functions; these may become dead
through inlining or other code rearrangement.


Hi Reid,
  I just ran an experiment and found that if I want to keep the
relocation information available, I have to statically link the
program, as stated in ALTO.

  I used GNU binutils 2.15 and GNU CC 3.4.1, but I expect LLCO will
not be limited to these.

Best Wishes

Hi Terry,

Reid is exactly right about the benefits of static (link-time) optimization for whole programs. When all libraries are available, it can allow significantly better optimization without run-time overhead.

But it is increasingly common today for libraries to be dynamically linked. In these cases, you could get the benefits of LLVM optimization in two ways, *if* you also compile the library to LLVM instead of native code:
(1) Optimize the library in the context of the program at (dynamic) link-time. You can do a surprising amount of analysis and optimization on fairly large programs in a few seconds. But to make this practical you probably have to engineer the optimizations to work on hot functions rather than the whole program.
(2) Optimize the program and library off-line between runs, cache the optimized code, and then verify at link-time that library versions have not changed.

Both of these are technically feasible but probably require significant engineering effort to do well. Also, we have no concrete evidence that any of this improves performance, though intuitively I would believe there are opportunities to improve performance substantially.



I just updated from source and got this at the top of the master Makefile:

DIRS = lib/System lib/Support utils lib

ifeq ($(MAKECMDGOALS),tools-only)
DIRS += tools
   ifneq ($(MAKECMDGOALS),libs-only)
     DIRS += runtime docs
     OPTIONAL_DIRS = examples projects

This makes my build incorrect. In the general case it doesn't build the tools, and building the runtime without the tools causes the build to crash. Also, the examples directory is empty, so building in it causes an error. Can someone fix this? Or I can, with authorization.



Robert L. Bocchino Jr.
Ph.D. Student
University of Illinois, Urbana-Champaign

Sorry about that. This is now fixed.

As for the "examples directory is empty" part, I suspect you've hit a
checkout error of some sort; all the files report as present in my
tree. If you haven't already, you should use an alias like this:

alias cvsup='cvs update -ARPd 2>/dev/null'

to update. This gets you the latest version on the trunk, pruning
empty directories and checking out newly added ones.