Recommended computer resources to build LLVM

I've got an i7 with 12 logical cores and 16GB of RAM. I successfully
built RELEASE_390/final, but for the last 100 or so files I had to use
"ninja -j2": in the best case to keep the machine from swapping, and in
the worst case because the build would kill itself without completing,
apparently having run out of memory.

For the first 3200 files or so it was doing just fine with "ninja"
(which is presumably "ninja -j12"), but as I said, for the last 100 or
so files I had to use "ninja -j2" to get it to complete.

So what are the recommended computer resources, and I guess RAM size in
particular, for building llvm "quickly"?

-- wink

I build llvm with -j 4 without swapping on 16 GB of RAM. I build it with LLVM_ENABLE_DYLIB and LLVM_LINK_DYLIB, which is probably helping.

The amount of RAM consumed is proportional to the number of parallel build processes. Also, swapping does not necessarily slow down compilation: the OS can swap out idle processes to free more RAM for use as file cache when that is beneficial.
Swapping with -j 12 should be faster than not swapping with -j 2, as you are guaranteed to max out the usage of your RAM.

That’s a reasonably well resourced machine, though the last couple of years I’ve been spec’ing quad core (plus HT) machines with 32 GB rather than 16 GB, and you’ve got 50% more cores.

Still that should be plenty for 12 compiler instances. I suspect the problem is something else, such as link steps.

I haven’t looked at the actual ninja rules, but you can specify a smaller “-j” setting for particular kinds of build steps. See:

https://ninja-build.org/manual.html#ref_pool

I have no idea whether the llvm ninja rules currently include something like that.
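For reference, this is roughly what a pool declaration looks like in a ninja build file (an illustrative fragment, not taken from LLVM's actual generated rules; the pool and rule names here are made up):

```ninja
# Declare a pool that permits at most 2 jobs at once.
pool link_pool
  depth = 2

# Any rule assigned to the pool is throttled to the pool's depth,
# independently of the global -j setting.
rule link
  command = $cxx $in -o $out $ldflags
  pool = link_pool
```

Compile rules left outside the pool still run at full -j parallelism, so only the memory-hungry link steps get serialized.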

Yes, it did appear to be in the linking steps. Initially I had only
a 240MB swap on my SSD; the build failed and my Chromium browser
tabs all died, probably because of CPU starvation. I then added a 16GB
swap file on a HD, but the computer still got very slow as the resources
got used up, and it was tough to do anything on it: even Ctrl-C
took a minute or more to abort the compilation. I tried -j8 with
more or less the same result, so I then dropped to -j2 and finally it
succeeded.
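For anyone reproducing this, adding a swap file on Linux goes roughly like this (a sketch; the path and size are examples, and the commands need root):

```shell
# Create a 16 GB swap file on a spinning disk (path is an example).
sudo fallocate -l 16G /mnt/hd/swapfile
sudo chmod 600 /mnt/hd/swapfile   # swap files must not be world-readable
sudo mkswap /mnt/hd/swapfile      # write the swap signature
sudo swapon /mnt/hd/swapfile      # enable it immediately
swapon --show                     # verify it is active
```

Note that swap on a HD is very slow; it keeps the OOM killer at bay but, as described above, can make the machine nearly unresponsive once linking starts paging.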

So it sounds like for 12 CPUs I should bump up the RAM size.

And the tip about changing -j for the linker might also bear some fruit.

Thanks.

I'll try with the *_DYLIB on, txs.

You can use the ninja pool stuff with -DLLVM_PARALLEL_LINK_JOBS=4 (or 2,
or whatever) on the cmake invocation.

Bruce Hoult via llvm-dev <llvm-dev@lists.llvm.org> writes:

Where/when/how do you specify LLVM_ENABLE_DYLIB and LLVM_LINK_DYLIB?

I tried the following on the cmake command line:

$ cmake -G Ninja .. -DCMAKE_INSTALL_PREFIX=/home/wink/opt/llvm \
    -DLLVM_ENABLE_DYLIB=true -DLLVM_LINK_DYLIB=true

And got:
...
-- Performing Test CXX_SUPPORTS_NO_NESTED_ANON_TYPES_FLAG - Failed
-- Configuring done
-- Generating done
CMake Warning:
  Manually-specified variables were not used by the project:

    LLVM_ENABLE_DYLIB
    LLVM_LINK_DYLIB

-- Build files have been written to: /home/wink/foss/llvm.3.9.0/build

Are you building with debug symbols?

If so, then be aware that linking libLLVM and libLLDB will require 3-5GB of memory, just for the link step.

On my 8GB machine, I cannot even use -j2 to compile LLVM with debug symbols, as I will page to death during linking.

That is because I mistyped it:
LLVM_ENABLE_LLVM_DYLIB:BOOL=ON
LLVM_LINK_LLVM_DYLIB:BOOL=ON

And again…
LLVM_BUILD_LLVM_DYLIB:BOOL=ON
LLVM_LINK_LLVM_DYLIB:BOOL=ON

This one is the good one… maybe.
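Putting the corrected names together, the configure line would look something like this (a sketch; the install prefix is just an example):

```shell
# Build libLLVM as a single shared library and link the LLVM tools
# against it, which greatly reduces per-binary link memory.
cmake -G Ninja .. \
  -DCMAKE_INSTALL_PREFIX=$HOME/opt/llvm \
  -DLLVM_BUILD_LLVM_DYLIB=ON \
  -DLLVM_LINK_LLVM_DYLIB=ON
```

Since these are real CMake cache options, a typo like the earlier one only produces the "Manually-specified variables were not used" warning rather than an error, so it pays to check the configure output.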

I usually work on release+asserts builds, which are much faster and don't require as much RAM as Debug/RelWithDebInfo builds. I only build debug when I need it :)
Also lowering jobs for linking is very helpful when building with debug/LTO etc.
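A release+asserts configuration like the one described above would typically be set up as follows (a sketch; the link-job count is an example value):

```shell
# Release build with assertions kept on; links need far less RAM
# than Debug or RelWithDebInfo builds.
cmake -G Ninja .. \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_ASSERTIONS=ON \
  -DLLVM_PARALLEL_LINK_JOBS=2
```

If CMAKE_BUILD_TYPE is left unset, CMake's default is an unoptimized build without debug info, so it is worth setting explicitly.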

@djones, I'm not specifying debug, just going with the default. Is that
debug or release? How do I specify it?

@prazek, can you share your cmake and or ninja command line?

Actually are most people using -G Ninja for cmake as the documentation suggests?

I'll give it a try on the next build, txs again.

So with this cmake command line:

$ cmake -G Ninja .. -DCMAKE_INSTALL_PREFIX=/home/wink/opt/llvm \
    -DLLVM_PARALLEL_LINK_JOBS=2

And then running the build with "time ninja" took 21min:

$ time ninja
...
[3345/3481] Building C object
tools/llvm-c-test/CMakeFiles/llvm-c-test.dir/metadata.c.o
In file included from ../tools/llvm-c-test/llvm-c-test.h:17:0,
                 from ../tools/llvm-c-test/metadata.c:15:
../include/llvm-c/Core.h:83:23: warning: enumerator value for
‘LLVMNonLazyBind’ is not an integer constant expression [-Wpedantic]
     LLVMNonLazyBind = 1 << 31
                       ^
[3481/3481] Linking CXX executable bin/opt

real 20m57.995s
user 182m51.022s
sys 7m36.690s

For maybe 20 seconds 100% of RAM was used while compiling,
and then twice during linking it was at 95-98%, so overall not too bad.
I'm now going to try "time ninja check-all" I'll report back what happens.

Thanks everyone for the assistance!!!

-- Wink

Re-add llvm-dev, which keeps getting lost in my replies...

Can you try the latest master branch? That has ThinLTO enabled and
does not need more than 6 GB.

I’m not sure why ThinLTO would be a solution here: the original post does not even mention LTO and I’d expect ThinLTO to consume strictly more memory than a non-LTO build.
Also ThinLTO is not enabled by default for -flto.

In general, LLVM builds a lot of binaries from the same intermediate object files, which is not a favorable use-case for ThinLTO.

Using Gold and reducing the number of parallel link jobs is probably more promising.
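Combining both suggestions in one configure line would look something like this (a sketch; note that the LLVM_USE_LINKER option may not exist in older release branches, where adding -fuse-ld=gold to the linker flags achieves the same effect):

```shell
# Use the gold linker (CMake passes -fuse-ld=gold to the compiler
# driver) and cap concurrent link jobs to bound peak memory.
cmake -G Ninja .. \
  -DLLVM_USE_LINKER=gold \
  -DLLVM_PARALLEL_LINK_JOBS=2
```

Gold generally needs less memory and time than BFD ld for large C++ links, which is exactly where this thread's builds were failing.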

This is from personal experience. The same thing happened to me (4
cores, 8 HT, 16 GB RAM). The build kept failing during the linking
stage. I went and got another 16 GB, but by that time the master branch
had started consuming no more than 6 GB. The only change was the LTO
linking, so I assumed it was the ThinLTO change and didn't investigate
further.

Worth a try!