The libc implementation for the GPUs

I can only cite the documentation:

$> cmake ../llvm -G Ninja                             \
   -DLLVM_ENABLE_PROJECTS="clang;lld;compiler-rt"     \
   -DLLVM_ENABLE_RUNTIMES="libc;openmp"               \
   -DCMAKE_BUILD_TYPE=<Debug|Release>  \ # Select build type
   -DLLVM_LIBC_FULL_BUILD=ON           \ # We need the full libc
   -DLLVM_LIBC_TARGET_OS=gpu           \ # Build in GPU mode
   -DLLVM_LIBC_GPU_ARCHITECTURES=all   \ # Build all supported architectures
   -DCMAKE_INSTALL_PREFIX=<PATH>         # Where 'libcgpu.a' will live
$> ninja install
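For completeness, the installed archive is then meant to be linked into an offloading compile. A minimal sketch of such a test, assuming an AMD target (the --offload-arch value is just an example), the <PATH> install prefix from above, and that the libcgpu.a archive is pulled in with -lcgpu (an assumption on my part, adjust to however your install names it):

// hello_gpu.cpp -- device-side printf provided by the GPU libc.
// Hypothetical compile line (arch and paths are assumptions, adjust as needed):
//   clang++ hello_gpu.cpp -fopenmp --offload-arch=gfx90a -L<PATH>/lib -lcgpu -o hello_gpu
#include <cstdio>

int main() {
#pragma omp target
  { std::printf("hello from the GPU\n"); }
  return 0;
}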

I’m not sure why that file is failing to compile; it should be using the system libraries. The generated file should be somewhere like ./runtimes/runtimes-bins/libc/cpu_features/check_cpu_features.cpp. Can you compile it manually, outside of CMake? It should just be using the freshly built clang, but it’s possible that it can’t find the system headers from there.
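Something like this minimal stand-in (the check really only needs the system <cstdio> to resolve) should tell you whether the just-built clang can find the headers; the compiler path below is an assumption, adjust it to your build tree:

// cstdio_check.cpp -- stand-in for the failing configure check.
// Hypothetical invocation with the freshly built compiler; -v prints the
// header search paths, which is the interesting part if this fails:
//   ./bin/clang++ -v -c cstdio_check.cpp -o /dev/null
#include <cstdio>

int main() {
  std::puts("found <cstdio>");
  return 0;
}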

On JURECA at JSC, I was able to build the libc, though.

The file is libc/cmake/modules/cpu_features/check_cpu_features.cpp.in. It does indeed include <cstdio>.

Did the GPU build fail while the normal build succeeded? And did you run the GPU build on aarch64-unknown-linux-gnu and the normal libc build on Intel Linux?

JURECA is x86 based.

Ah, right. I was just trying to see if it builds on arm64; I don’t think Apple’s GPUs are supported by LLVM.