I just found Clang/LLVM a few days ago. It looks very cool, but if you don't mind a right-off-the-bat observation: somewhere in the Unix-like build recipe it would be very handy to caution new users about the memory needed to build Clang. My machine has 4GB of RAM plus 4GB of swap space, and that wasn't enough. Adding another 4GB of swap still wasn't enough. Ultimately, I had to tick it up to a total of 10GB of swap; monitoring things with the "top" utility, the biggest memory requirement I saw was just a little short of 11GB.

In Fedora Linux, it's easy enough to create temporary additional swap space, but I have no idea how to do that in other distros, if it differs.
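For reference, a temporary swap file works much the same way on most Linux distros, not just Fedora. A rough sketch (the file path and size here are just illustrative; size it to your build's needs):

```shell
# Create an 8 GiB swap file (size is illustrative; adjust as needed).
sudo fallocate -l 8G /swapfile-llvm
sudo chmod 600 /swapfile-llvm   # swap files must not be world-readable
sudo mkswap /swapfile-llvm      # format the file as swap
sudo swapon /swapfile-llvm      # enable it for the current session

# When the build is done, tear it down again:
sudo swapoff /swapfile-llvm
sudo rm /swapfile-llvm
```

Since the file is only enabled with `swapon` and never added to `/etc/fstab`, it disappears cleanly after the build.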

Anyway, cool stuff, and I’m looking forward to fiddling with it.

Hi Chris,

My experience is that by far the peak memory consumption occurs during linking. Compilations don’t use much. And debug configurations seriously increase memory consumption during linking. This means, perversely, that it’s likely faster to do a Release build than a Debug build. I basically never do a Debug build because it takes too long.

I am curious what configuration you used (probably Debug?) and how many parallel threads you had. Because linking is the memory hog, you might try using `-DLLVM_PARALLEL_LINK_JOBS=1`, which might help avoid swapping (which seriously slows down a build).
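A minimal sketch of a configure line with that flag; the source/build directory layout and the Ninja generator are assumptions, so adapt them to your setup:

```shell
# Configure a Release build of LLVM + Clang with a single parallel link job.
# LLVM_PARALLEL_LINK_JOBS caps only the linker processes; compile jobs still
# run at full parallelism.
cmake -S llvm -B build -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS=clang \
  -DLLVM_PARALLEL_LINK_JOBS=1

ninja -C build
```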



P.S. Yeah, some cautionary words about memory consumption seem like a good idea.

Despite having 32GB of RAM (and not much swap space), building a debug version of LLVM + Clang with -j N for N > ~10 will make my machine almost stop, to the point where I hit a key and it takes several seconds to respond to that single keypress.

Non-debug builds are much less memory hungry, but of course if you actually want to debug your own code when it crashes inside some LLVM function (because you passed in something bad), it is a lot harder. I have both release and debug builds on my machine for that very reason.

What I find useful is to pass a small value like -DLLVM_PARALLEL_LINK_JOBS=2 into the cmake/ninja build. For me, it's always been too many parallel link jobs that brought the build machine to its knees.

Ah, that sounds very helpful. I'll give it a try the next time I fetch a new version (I don't update my LLVM + Clang every time I work on my project, since rebuilding would cut into my work throughput even more than other distractions already do).

Creating shared libs instead of static libs using -DBUILD_SHARED_LIBS=on will help, as the memory required to link executables will be much smaller. (Though I don't recall whether Clang's linking is affected by this or just LLVM's.)
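Putting the suggestions in this thread together, a memory-conscious debug configure might look roughly like this. The flags are real LLVM CMake options, but the directory names and generator choice are assumptions:

```shell
# Debug build that keeps link-time memory down: shared libraries plus a
# capped number of parallel link jobs.
cmake -S llvm -B build-debug -G Ninja \
  -DCMAKE_BUILD_TYPE=Debug \
  -DLLVM_ENABLE_PROJECTS=clang \
  -DBUILD_SHARED_LIBS=ON \
  -DLLVM_PARALLEL_LINK_JOBS=2

ninja -C build-debug
```

Note that BUILD_SHARED_LIBS builds are generally recommended only for development, not for distributing binaries.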