Hello everyone!
I am trying to wrap my head around the “runtimes build” design and how it applies to what I’m trying to do: bootstrap a toolchain that can act as either an x86 host compiler or as a cross-compiler for a baremetal ARM embedded system.
IIRC, the rationale of the runtimes build is that you want to build the runtimes with the just-built compiler, to produce a fully coherent toolchain. I like this idea, but I don’t see how it could work in one stage currently, because of the pesky culprit of libc.
I would think the process could go like this (a rough configure sketch follows the list):

1. Build clang.
2. Build the builtins from compiler-rt with `LLVM_BUILTIN_TARGETS=default;aarch64-unknown-elf`, and in my case also `BUILTINS_aarch64-unknown-elf_COMPILER_RT_BAREMETAL_BUILD=ON`.
3. Build libc so that the downstream runtimes (e.g., libc++) can actually succeed, and set `RUNTIMES_aarch64-unknown-elf_CMAKE_SYSROOT` to point to it.
4. Build the other runtimes.
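For concreteness, here is roughly the single-invocation configure I have in mind. Treat it as a sketch: the project/runtime lists, the build type, and the sysroot path are placeholders of mine, and it presumes step 3 is already solved somehow.

```sh
cmake -G Ninja -S llvm -B build \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS="clang;lld" \
  -DLLVM_ENABLE_RUNTIMES="compiler-rt;libcxx;libcxxabi;libunwind" \
  -DLLVM_BUILTIN_TARGETS="default;aarch64-unknown-elf" \
  -DBUILTINS_aarch64-unknown-elf_COMPILER_RT_BAREMETAL_BUILD=ON \
  -DLLVM_RUNTIME_TARGETS="default;aarch64-unknown-elf" \
  -DRUNTIMES_aarch64-unknown-elf_CMAKE_SYSROOT=/path/to/embedded/sysroot  # step 3: needs a libc here
```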
The trouble is #3, since I am not sure how to orchestrate the building of a libc from the LLVM CMake build system. I know LLVM has a libc itself, but I am guessing it is not yet at the stage where it can be completely freestanding (i.e., not needing another libc to interpose on?). And even if it were, I don’t think being super lean for embedded was a design goal in any case.
So my question is: have I understood the situation correctly?
And what is the best way to deal with it? I can think of a few ways, e.g., a wrapper script that does three stages (sketched after the list):

- Stage 1: run CMake a first time, building only a host clang that can target AArch64, with compiler-rt as the only runtime and the builtins enabled for AArch64.
- Stage 2: (outside of LLVM) use the new AArch64-capable clang and its builtins to build an embedded libc and stash it somewhere.
- Stage 3: run CMake a second time, building everything, with the AArch64 sysroot now pointing at that libc.
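In script form, the three stages might look something like the following. This is only a sketch: every path and flag list is an assumption on my part, and stage 2 is left abstract since the choice of embedded libc (picolibc, newlib, …) and its build system live outside LLVM.

```sh
#!/bin/sh
set -e

# Stage 1: host clang/lld, with compiler-rt as the only runtime and
# builtins enabled for the AArch64 baremetal target.
cmake -G Ninja -S llvm -B build-stage1 \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS="clang;lld" \
  -DLLVM_ENABLE_RUNTIMES="compiler-rt" \
  -DLLVM_BUILTIN_TARGETS="default;aarch64-unknown-elf" \
  -DBUILTINS_aarch64-unknown-elf_COMPILER_RT_BAREMETAL_BUILD=ON
ninja -C build-stage1

# Stage 2: outside of LLVM, build an embedded libc with the stage-1 clang
# and install it into a scratch sysroot. The details depend entirely on
# which libc is chosen, so this part is just a placeholder.
CC="$PWD/build-stage1/bin/clang --target=aarch64-unknown-elf"
export CC
# ... configure/build/install the libc into "$PWD/sysroot" ...

# Stage 3: reconfigure with the full runtime set, pointing the AArch64
# runtimes at the sysroot produced in stage 2.
cmake -G Ninja -S llvm -B build-stage3 \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS="clang;lld" \
  -DLLVM_ENABLE_RUNTIMES="compiler-rt;libcxx;libcxxabi;libunwind" \
  -DLLVM_BUILTIN_TARGETS="default;aarch64-unknown-elf" \
  -DLLVM_RUNTIME_TARGETS="default;aarch64-unknown-elf" \
  -DRUNTIMES_aarch64-unknown-elf_CMAKE_SYSROOT="$PWD/sysroot"
ninja -C build-stage3
```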
Thanks for your help!
Ken