Difference between the autotools, CMake, and Ninja build methods

While reading a book about LLVM, I was confused about how to choose
between the autotools, CMake, and Ninja build methods.
Is there a link that explains the difference?
    Thanks for any reply.

The autotools build system is officially deprecated and will be removed in a future release.

CMake is the recommended configuration system, but it is only a configuration system. It generates build files for multiple different build systems. One of the most popular build systems is Ninja. You cannot build LLVM with Ninja without using CMake, but CMake doesn’t require Ninja. You can use CMake to generate Makefiles as well as Xcode and Visual Studio projects, and more.

This page has the LLVM project’s documentation on using CMake: http://llvm.org/docs/CMake.html
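
For illustration, picking different generators from the same source tree
might look like this (a minimal sketch; the paths are placeholders, and each
generator needs its own build directory):

    cmake -G Ninja ../llvm              # writes build.ninja; build with `ninja`
    cmake -G "Unix Makefiles" ../llvm   # writes Makefiles; build with `make`
    cmake -G Xcode ../llvm              # writes an Xcode project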

-Chris

2015-11-30 12:58 GMT+08:00 Chris Bieneman <beanz@apple.com>:

> The autotools build system is officially deprecated and will be removed in a
> future release.
>
> CMake is the recommended configuration system, but it is only a
> configuration system. It generates build files for multiple different build
> systems. One of the most popular build systems is Ninja. You cannot build
> LLVM with Ninja without using CMake, but CMake doesn’t require Ninja. You
> can use CMake to generate Makefiles as well as Xcode and Visual Studio
> projects, and more.

I often use CMake to compile LLVM and Clang, but I have run into an
annoying problem: when I update all the repositories (llvm, clang,
compiler-rt) and try to recompile, the build fails, and it takes me a
long time to recompile. Does anyone have advice about this problem?

> I often use CMake to compile LLVM and Clang, but I have run into an
> annoying problem: when I update all the repositories (llvm, clang,
> compiler-rt) and try to recompile, the build fails

Do you mean when you update the repositories you can’t compile llvm, clang, and compiler-rt? Do you have a specific error? Keep in mind the llvm, clang, and compiler-rt repositories need to be kept in sync. You can’t update one without updating them all.
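
For concreteness, with the usual in-tree layout (clang checked out under
llvm/tools and compiler-rt under llvm/projects, each as its own working
copy; adjust the paths to your setup) that means updating them together:

    svn update llvm
    svn update llvm/tools/clang
    svn update llvm/projects/compiler-rt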

> and it takes me a long time to recompile.

Clang is quite a large source project; depending on your hardware, it may take a while to compile. Ninja is better than any other tool that I’m aware of at minimizing the work required during incremental builds.
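
As a minimal illustration (the touched file is just an arbitrary example):

    ninja            # first build: compiles everything
    touch ../llvm/lib/Support/Path.cpp   # an update touches one file...
    ninja            # ...rebuilds only that file and relinks its dependents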

-Chris

When it comes to build time, I ran into similar issues the first time
(quite recently, in fact) that I tried to compile LLVM and Clang in
accordance with the Clang getting started document. After some
searching, I found the following blog posts containing various tips
for speeding up your build:

http://blogs.s-osg.org/an-introduction-to-accelerating-your-build-with-clang/
http://blogs.s-osg.org/a-conclusion-to-accelerating-your-build-with-clang/

Some suggestions therein (incl. the parameters to pass to cmake for
each bullet point; a combined invocation follows the list):
* As already mentioned, use ninja instead of make. (-G "Ninja")
* Build with Clang instead of GCC. Rather than doing the bootstrapping
yourself, you can also download the binaries for a previous release
and then use these for building the snapshot version. (e.g.,
-DCMAKE_C_COMPILER=/usr/bin/clang
-DCMAKE_CXX_COMPILER=/usr/bin/clang++)
* If you're mainly interested in Clang, you can restrict the number of
targets to build for. (e.g., -DLLVM_TARGETS_TO_BUILD="X86")
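
Combining those, the configure step might look like this (a sketch; the
clang paths are just examples from my system):

    cmake -G Ninja \
          -DCMAKE_C_COMPILER=/usr/bin/clang \
          -DCMAKE_CXX_COMPILER=/usr/bin/clang++ \
          -DLLVM_TARGETS_TO_BUILD="X86" \
          ../llvm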

The referenced blog posts provide many more suggestions, but by using
the above I was able to bring my build time down from over two hours
to 'only' 42 minutes (although I read stories online from people who
need less than 15 minutes for a full build, so perhaps I still missed
something).

I'd be happy to hear about any further tips.

>> I often use CMake to compile LLVM and Clang, but I have run into an
>> annoying problem: when I update all the repositories (llvm, clang,
>> compiler-rt) and try to recompile, the build fails

> Do you mean when you update the repositories you can’t compile llvm, clang,
> and compiler-rt? Do you have a specific error? Keep in mind the llvm, clang,
> and compiler-rt repositories need to be kept in sync. You can’t update one
> without updating them all.

I always "svn update" three repositories. I don't have a specific error.

>> and it takes me a long time to recompile.

> Clang is quite a large source project; depending on your hardware, it may
> take a while to compile. Ninja is better than any other tool that I’m aware
> of at minimizing the work required during incremental builds.

But I have a problem compiling with Ninja. I built on both Debian
stable and Debian testing, and I always get errors when I run the
"ninja" command.

FAILED: : && /usr/bin/c++ -fPIC -fvisibility-inlines-hidden -Wall -W
-Wno-unused-parameter -Wwrite-strings -Wcast-qual
-Wno-missing-field-initializers -pedantic -Wno-long-long
-Wno-maybe-uninitialized -Wnon-virtual-dtor -Wno-comment -std=c++11
-ffunction-sections -fdata-sections -fno-common -Woverloaded-virtual
-fno-strict-aliasing -Wl,-allow-shlib-undefined -Wl,--export-dynamic
-Wl,-O3 tools/clang/tools/driver/CMakeFiles/clang.dir/driver.cpp.o
tools/clang/tools/driver/CMakeFiles/clang.dir/cc1_main.cpp.o
tools/clang/tools/driver/CMakeFiles/clang.dir/cc1as_main.cpp.o -o
bin/clang-3.8 lib/libLLVMAArch64CodeGen.a
lib/libLLVMAArch64AsmPrinter.a lib/libLLVMAArch64AsmParser.a
lib/libLLVMAArch64Desc.a lib/libLLVMAArch64Info.a
lib/libLLVMAArch64Disassembler.a lib/libLLVMAMDGPUCodeGen.a
lib/libLLVMAMDGPUAsmPrinter.a lib/libLLVMAMDGPUAsmParser.a
lib/libLLVMAMDGPUDesc.a lib/libLLVMAMDGPUInfo.a
lib/libLLVMARMCodeGen.a lib/libLLVMARMAsmPrinter.a
lib/libLLVMARMAsmParser.a lib/libLLVMARMDesc.a lib/libLLVMARMInfo.a
lib/libLLVMARMDisassembler.a lib/libLLVMBPFCodeGen.a
lib/libLLVMBPFAsmPrinter.a lib/libLLVMBPFDesc.a lib/libLLVMBPFInfo.a
lib/libLLVMCppBackendCodeGen.a lib/libLLVMCppBackendInfo.a
lib/libLLVMHexagonCodeGen.a lib/libLLVMHexagonAsmParser.a
lib/libLLVMHexagonDesc.a lib/libLLVMHexagonInfo.a
lib/libLLVMHexagonDisassembler.a lib/libLLVMMipsCodeGen.a
lib/libLLVMMipsAsmPrinter.a lib/libLLVMMipsAsmParser.a
lib/libLLVMMipsDesc.a lib/libLLVMMipsInfo.a
lib/libLLVMMipsDisassembler.a lib/libLLVMMSP430CodeGen.a
lib/libLLVMMSP430AsmPrinter.a lib/libLLVMMSP430Desc.a
lib/libLLVMMSP430Info.a lib/libLLVMNVPTXCodeGen.a
lib/libLLVMNVPTXAsmPrinter.a lib/libLLVMNVPTXDesc.a
lib/libLLVMNVPTXInfo.a lib/libLLVMPowerPCCodeGen.a
lib/libLLVMPowerPCAsmPrinter.a lib/libLLVMPowerPCAsmParser.a
lib/libLLVMPowerPCDesc.a lib/libLLVMPowerPCInfo.a
lib/libLLVMPowerPCDisassembler.a lib/libLLVMSparcCodeGen.a
lib/libLLVMSparcAsmPrinter.a lib/libLLVMSparcAsmParser.a
lib/libLLVMSparcDesc.a lib/libLLVMSparcInfo.a
lib/libLLVMSparcDisassembler.a lib/libLLVMSystemZCodeGen.a
lib/libLLVMSystemZAsmPrinter.a lib/libLLVMSystemZAsmParser.a
lib/libLLVMSystemZDesc.a lib/libLLVMSystemZInfo.a
lib/libLLVMSystemZDisassembler.a lib/libLLVMX86CodeGen.a
lib/libLLVMX86AsmPrinter.a lib/libLLVMX86AsmParser.a
lib/libLLVMX86Desc.a lib/libLLVMX86Info.a lib/libLLVMX86Disassembler.a
lib/libLLVMXCoreCodeGen.a lib/libLLVMXCoreAsmPrinter.a
lib/libLLVMXCoreDesc.a lib/libLLVMXCoreInfo.a
lib/libLLVMXCoreDisassembler.a lib/libLLVMAnalysis.a
lib/libLLVMCodeGen.a lib/libLLVMCore.a lib/libLLVMipo.a
lib/libLLVMInstCombine.a lib/libLLVMInstrumentation.a lib/libLLVMMC.a
lib/libLLVMMCParser.a lib/libLLVMObjCARCOpts.a lib/libLLVMOption.a
lib/libLLVMScalarOpts.a lib/libLLVMSupport.a
lib/libLLVMTransformUtils.a lib/libLLVMVectorize.a lib/libclangBasic.a
lib/libclangCodeGen.a lib/libclangDriver.a lib/libclangFrontend.a
lib/libclangFrontendTool.a lib/libLLVMAArch64Desc.a
lib/libLLVMAArch64AsmPrinter.a lib/libLLVMAArch64Info.a
lib/libLLVMAArch64Utils.a lib/libLLVMAMDGPUAsmPrinter.a
lib/libLLVMAMDGPUUtils.a lib/libLLVMARMDesc.a
lib/libLLVMARMAsmPrinter.a lib/libLLVMARMInfo.a
lib/libLLVMBPFAsmPrinter.a lib/libLLVMHexagonDesc.a
lib/libLLVMHexagonInfo.a lib/libLLVMMipsAsmPrinter.a
lib/libLLVMMipsInfo.a lib/libLLVMMSP430AsmPrinter.a
lib/libLLVMNVPTXAsmPrinter.a lib/libLLVMPowerPCAsmPrinter.a
lib/libLLVMPowerPCInfo.a lib/libLLVMSparcAsmPrinter.a
lib/libLLVMSparcInfo.a lib/libLLVMSystemZDesc.a
lib/libLLVMSystemZAsmPrinter.a lib/libLLVMSystemZInfo.a
lib/libLLVMX86CodeGen.a lib/libLLVMX86Desc.a
lib/libLLVMX86AsmPrinter.a lib/libLLVMX86Utils.a lib/libLLVMX86Info.a
lib/libLLVMXCoreAsmPrinter.a lib/libLLVMAsmPrinter.a
lib/libLLVMSelectionDAG.a lib/libLLVMCodeGen.a lib/libLLVMXCoreInfo.a
lib/libLLVMMCDisassembler.a lib/libclangCodeGen.a lib/libLLVMipo.a
lib/libLLVMVectorize.a lib/libLLVMInstrumentation.a
lib/libLLVMObjCARCOpts.a lib/libLLVMScalarOpts.a
lib/libLLVMInstCombine.a lib/libLLVMTarget.a lib/libLLVMBitWriter.a
lib/libLLVMIRReader.a lib/libLLVMAsmParser.a lib/libLLVMLinker.a
lib/libLLVMTransformUtils.a lib/libLLVMAnalysis.a
lib/libLLVMProfileData.a lib/libLLVMObject.a
lib/libclangRewriteFrontend.a lib/libclangARCMigrate.a
lib/libclangStaticAnalyzerFrontend.a lib/libclangFrontend.a
lib/libclangDriver.a lib/libLLVMOption.a lib/libclangParse.a
lib/libLLVMMCParser.a lib/libclangSerialization.a
lib/libLLVMBitReader.a lib/libclangSema.a lib/libclangEdit.a
lib/libclangStaticAnalyzerCheckers.a lib/libclangStaticAnalyzerCore.a
lib/libclangAnalysis.a lib/libclangAST.a lib/libclangRewrite.a
lib/libclangLex.a lib/libclangBasic.a lib/libLLVMCore.a
lib/libLLVMMC.a lib/libLLVMSupport.a -lrt -ldl -ltinfo -lpthread -lm
-Wl,-rpath,"\$ORIGIN/../lib" && :
collect2: fatal error: ld terminated with signal 9 [Killed]
compilation terminated.
[2706/2983] Linking CXX executable bin/clang-check
ninja: build stopped: subcommand failed.

This looks like the linker is running out of memory. This is a huge link job and the BFD linker will consume at least one GB of RAM, possibly far more depending on your build config. If you’re on a 32-bit platform, then you won’t be able to do this with a debug build.

The big speedup if you’re using BFD ld is to enable shared library support. This is a particularly big win if you’re doing incremental builds a lot, because the extra startup time overhead is likely to be far less than the 4-5 minutes of extra time spent linking, even on a fast machine.

If you do want to do static linking, then your best bet is to do ninja -k 10, which should let it skip over the link jobs that fail, but keep compiling the sources, then ninja -j1, which will make it do the remaining steps one at a time. In general, the defaults for ninja work well for LLVM/Clang if you have at least one GB of RAM per core.
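
A sketch of both routes (BUILD_SHARED_LIBS is the generic CMake switch for
shared-library builds):

    # Shared-library configuration: far cheaper final links.
    cmake -G Ninja -DBUILD_SHARED_LIBS=ON ../llvm

    # Static build on a memory-constrained machine:
    ninja -k 10    # keep going past the link jobs the OOM killer takes down
    ninja -j1      # then finish the remaining links one at a time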

On a modern laptop, you should be able to do a clean build in 5-10 minutes, but BFD ld can be a bottleneck. It would be nice if CMake had an option to stick -fuse-ld=gold (or -fuse-ld=lld) in all of the correct places, but when I’ve tried poking the link flags to add this myself I’ve found weird linking errors that I haven’t had time to debug.

David

I seem to have had success with -DCMAKE_C_FLAGS="-fuse-ld=gold"
-DCMAKE_CXX_FLAGS="-fuse-ld=gold", but YMMV.

Alex

David Chisnall via llvm-dev <llvm-dev@lists.llvm.org> writes:

>> collect2: fatal error: ld terminated with signal 9 [Killed]
>> compilation terminated.
>> [2706/2983] Linking CXX executable bin/clang-check
>> ninja: build stopped: subcommand failed.
>
> This looks like the linker is running out of memory. This is a huge
> link job and the BFD linker will consume at least one GB of RAM, possibly
> far more depending on your build config. If you’re on a 32-bit
> platform, then you won’t be able to do this with a debug build.
>
> The big speedup if you’re using BFD ld is to enable shared library
> support. This is a particularly big win if you’re doing incremental
> builds a lot, because the extra startup time overhead is likely to be
> far less than the 4-5 minutes of extra time spent linking, even on a
> fast machine.
>
> If you do want to do static linking, then your best bet is to do ninja
> -k 10, which should let it skip over the link jobs that fail, but keep
> compiling the sources, then ninja -j1, which will make it do the
> remaining steps one at a time. In general, the defaults for ninja
> work well for LLVM/Clang if you have at least one GB of RAM per core.

There's a cmake parameter `-DLLVM_PARALLEL_LINK_JOBS=1` that will limit
the number of concurrent link jobs. That makes for a better experience
than running ninja twice :)
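
For example:

    cmake -G Ninja -DLLVM_PARALLEL_LINK_JOBS=1 ../llvm

As far as I know this is implemented with a Ninja job pool, so it only takes
effect with the Ninja generator; compile jobs still run fully parallel while
the links are serialized.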

>> collect2: fatal error: ld terminated with signal 9 [Killed]
>> compilation terminated.
>> [2706/2983] Linking CXX executable bin/clang-check
>> ninja: build stopped: subcommand failed.

> This looks like the linker is running out of memory. This is a huge link
> job and the BFD linker will consume at least one GB of RAM, possibly far
> more depending on your build config. If you’re on a 32-bit platform, then
> you won’t be able to do this with a debug build.

With Fission (AKA Split Dwarf) you might be able to manage it (check for
the BLAH_BLAH_SPLIT_DWARF option in the CMake config)
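
If I remember correctly the option is spelled LLVM_USE_SPLIT_DWARF (treat
that name as an assumption and check your CMake cache), so the sketch would
be:

    cmake -G Ninja -DLLVM_USE_SPLIT_DWARF=ON ../llvm

Split DWARF moves most debug info into side .dwo files that the linker never
has to read, which is what cuts the memory requirement.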

> The big speedup if you’re using BFD ld is to enable shared library
> support. This is a particularly big win if you’re doing incremental builds
> a lot, because the extra startup time overhead is likely to be far less
> than the 4-5 minutes of extra time spent linking, even on a fast machine.

If you plan to run the regression tests with that build, you'll probably
lose the time you gained by linking - the startup of all the llvm tools for
all those little tests adds up. A "make check-all" of clang/llvm, at least
on my machine (32 cores, 32GB of RAM, running off an SSD), is slower with a
shared library build (even if you just touch one cc file - causing maximal
linking, minimal compiling, etc), unfortunately.

> If you do want to do static linking, then your best bet is to do ninja -k
> 10, which should let it skip over the link jobs that fail, but keep
> compiling the sources, then ninja -j1, which will make it do the remaining
> steps one at a time. In general, the defaults for ninja work well for
> LLVM/Clang if you have at least one GB of RAM per core.

I think there's a ninja flag for the number of link jobs you want to run in
parallel, separate from the usual parallelism.

Are you using bfd ld? On the machine I use for testing (32 cores, 256GB RAM, spinning rust disks but with a 512GB SSD used for L2ARC / ZIL), it takes longer to do the final link of a debug build of clang than it takes to run the entire test suite. With ld64 or gold, the link is much, much faster (the RAM requirements are also a lot lower) - the final link of a debug build with ld64 is only a few seconds.

David