Tutorial: testing libc in riscv32 qemu

The following is a tutorial on how to create a riscv32 image using yocto so we can build and test LLVM’s libc.

1. Creating the riscv32 image

We are using yocto to generate the riscv32 image since it allows us to include the compiler along with the image; in our case, we’ll be including not only gcc but also Clang.

Yocto provides a GUI called Toaster to configure and build images; however, I had issues when building the final image (some Python errors), so we’ll be using the command line to configure and build our image.

1.1 Cloning the necessary repositories

We need to clone three repositories:

  • poky: a minimal Linux distribution that yocto uses to create the image with our custom configuration.
  • meta-oe: a layer of metadata that provides additional recipes and configuration files for the OpenEmbedded build system.
  • meta-clang: a layer of metadata that provides support for building the Linux kernel and user space applications using the Clang/LLVM compiler toolchain, as an alternative to the more commonly used GCC compiler toolchain.
git clone https://github.com/yoctoproject/poky.git
git clone https://github.com/kraj/meta-clang.git
git clone https://github.com/openembedded/meta-openembedded
The latest stable version of yocto is v4.1.3 (langdale); however, I had several issues with the packages included in it, including several kernel panics when trying to compile libc, so I suggest using the tip of tree (ToT) of each repo.

1.2 Configuring the image

Now that we cloned the repos, we need to configure poky with the custom packages we need to build libc.

cd poky/
source oe-init-build-env

Once you run these commands, you should now be in poky/build, which should contain two important files inside a conf directory:

  • conf/bblayers.conf: used to add the extra layers with recipes to build our image.
  • conf/local.conf: used to configure everything, from image size to the included packages.

In conf/bblayers.conf, you should include the path to the layers we just downloaded:

BBLAYERS = "\
  <path-to-poky-repo>/meta \
  <path-to-poky-repo>/meta-poky \
  <path-to-poky-repo>/meta-yocto-bsp \
  <path-to-meta-oe-repo>/meta-oe \
  <path-to-meta-oe-repo>/meta-python \
  <path-to-meta-oe-repo>/meta-networking \
  <path-to-meta-clang-repo> \
  "

In conf/local.conf, first, we need to configure yocto to include development tools, -dev packages and debug tools in the image. Search for the EXTRA_IMAGE_FEATURES variable and add tools-sdk dev-pkgs tools-debug. It should look like this:

EXTRA_IMAGE_FEATURES ?= "debug-tweaks tools-sdk dev-pkgs tools-debug"

You can also configure yocto to include profile tools, test tools, and source packages by changing this option. The conf file should have a list of available options when it’s first generated.

Second, we need to (at least) set the image size (variable IMAGE_ROOTFS_SIZE), set the target machine to riscv32 (variable MACHINE), and include the packages needed to build clang (variable IMAGE_INSTALL:append). The following lines, which include all of these changes, can be appended to the end of conf/local.conf.

# sets the image size to around 50GB (IMAGE_ROOTFS_SIZE is in kilobytes)
IMAGE_ROOTFS_SIZE ?= " 52300000"
DISTRO="poky"
PACKAGE_CLASSES="package_rpm"

# sets the target machine
MACHINE="qemuriscv32"
SSTATE_DIR="${TOPDIR}/../sstate-cache"

# include the following packages in the final image
IMAGE_INSTALL:append=" git cmake ninja htop vim bash-completion python3 python3-pip ntp mpfr bison flex dtc clang gdb"
IMAGE_FSTYPES="ext3 jffs2 tar.bz2"
DL_DIR="${TOPDIR}/../downloads"

1.3. Building the image

Once everything is set up, we are ready to build the image. From poky/build:

$ bitbake core-image-full-cmdline

Now sit back and wait. It took me a little more than 1 hour on a 32-core E5-2620 v4 @ 2.10GHz machine.

In the end, the image files should be placed in poky/build/tmp/deploy/images/qemuriscv32/:

$ ls tmp/deploy/images/qemuriscv32/
core-image-full-cmdline.env                                          fw_jump.elf
core-image-full-cmdline-qemuriscv32-20230425192407.qemuboot.conf     fw_payload.bin
core-image-full-cmdline-qemuriscv32-20230425192407.rootfs.ext3       fw_payload.elf
core-image-full-cmdline-qemuriscv32-20230425192407.rootfs.ext4       Image
core-image-full-cmdline-qemuriscv32-20230425192407.rootfs.jffs2      Image--6.1.20+git0+a8881762b5_423e199669-r0-qemuriscv32-20230425192407.bin
core-image-full-cmdline-qemuriscv32-20230425192407.rootfs.manifest   Image-qemuriscv32.bin
core-image-full-cmdline-qemuriscv32-20230425192407.rootfs.tar.bz2    modules--6.1.20+git0+a8881762b5_423e199669-r0-qemuriscv32-20230425192407.tgz
core-image-full-cmdline-qemuriscv32-20230425192407.rootfs.wic.qcow2  modules-qemuriscv32.tgz
core-image-full-cmdline-qemuriscv32-20230425192407.testdata.json     u-boot.bin
core-image-full-cmdline-qemuriscv32.ext3                             u-boot.elf
core-image-full-cmdline-qemuriscv32.ext4                             u-boot-initial-env
core-image-full-cmdline-qemuriscv32.jffs2                            u-boot-initial-env-qemuriscv32
core-image-full-cmdline-qemuriscv32.manifest                         u-boot-initial-env-qemuriscv32-2023.04-r0
core-image-full-cmdline-qemuriscv32.qemuboot.conf                    u-boot-qemuriscv32-2023.04-r0.bin
core-image-full-cmdline-qemuriscv32.tar.bz2                          u-boot-qemuriscv32-2023.04-r0.elf
core-image-full-cmdline-qemuriscv32.testdata.json                    u-boot-qemuriscv32.bin
core-image-full-cmdline-qemuriscv32.wic.qcow2                        u-boot-qemuriscv32.elf
fw_dynamic.bin                                                       uImage
fw_dynamic.elf                                                       uImage--6.1.20+git0+a8881762b5_423e199669-r0-qemuriscv32-20230425192407.bin
fw_jump.bin                                                          uImage-qemuriscv32.bin

Congratulations! Now you have a working riscv32 image that can be loaded with qemu!

2. Loading the riscv32 image using qemu

With the riscv32 image built, you can use the following command to load it. Replace <path-to-image-dir> with the directory where the image is located and replace core-image-full-cmdline-qemuriscv32-XYZ.rootfs.ext4 with the appropriate name.

qemu-system-riscv32 -nographic -machine virt -m 1G -smp 8 \
  -bios <path-to-image-dir>/fw_jump.elf \
  -kernel <path-to-image-dir>/Image \
  -append "root=/dev/vda rw" -drive id=disk0,file=<path-to-image-dir>/core-image-full-cmdline-qemuriscv32-XYZ.rootfs.ext4,if=none,format=raw \
  -device virtio-net-device,netdev=usernet \
  -netdev user,id=usernet,hostfwd=tcp::10222-:22 \
  -device virtio-blk-device,drive=disk0 \
  -object rng-random,filename=/dev/urandom,id=rng0 \
  -device virtio-rng-pci,rng=rng0 \
  -device virtio-tablet-pci \
  -device virtio-keyboard-pci

Some caveats:

  • Memory is limited to 1G by the kernel. I couldn’t figure out why.
  • You may need to adjust the -smp value for your setup.
  • At the end of the previous step, yocto suggests using the runqemu command to load the image; while it does load the image, I had issues getting the network working through the tap0 interface, which is why we use usernet in this example.

The first boot should take a little while, but eventually you’ll be prompted with:

Poky (Yocto Project Reference Distro) 4.2 qemuriscv32 ttyS0

qemuriscv32 login:

By default, the root user has no password.

Once we log in, we need to set up the network and the system date so we can clone the llvm repo. I also set up a swap file, due to the 1G RAM limit.

I run the following script every time I log into the qemu image:

# get an IP address on eth0 via DHCP
udhcpc -i eth0

# sync the clock once, then restart the NTP daemon
service ntpd stop
ntpd -q -g
service ntpd start

# recreate an 8G swap file (the swapon -s calls just print the current
# status; swapoff will complain on the very first boot, when /swapfile
# does not exist yet)
swapon -s
swapoff -v /swapfile
swapon -s
rm -f /swapfile
fallocate -l 8G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

You should now be able to clone the llvm repository using https. You should also be able to ssh into the image using ssh root@localhost -p10222 (the port is defined on the qemu command line), and copy files using scp -P10222 foo root@localhost: (yes, it’s uppercase -P<port-number> here).

3. Building libc

While support for riscv32 is under active development, the necessary patches are not yet upstream. For now, you will need to apply four patches to get the compilation working on riscv32:

  • One that fixes missing syscalls on 32-bit platforms.
  • One that fixes compilation using UInt<128> (riscv32 has no uint128_t by default).
  • One that refactors the riscv abstraction layer to handle both riscv32 and riscv64.

On top of those, you’ll also need the following patch:

diff --git a/libc/cmake/modules/LLVMLibCArchitectures.cmake b/libc/cmake/modules/LLVMLibCArchitectures.cmake
index 2cd315b99e69..ee26f1949500 100644
--- a/libc/cmake/modules/LLVMLibCArchitectures.cmake
+++ b/libc/cmake/modules/LLVMLibCArchitectures.cmake
@@ -165,6 +165,9 @@ if(LIBC_TARGET_OS STREQUAL "baremetal")
   set(LIBC_TARGET_OS_IS_BAREMETAL TRUE)
 elseif(LIBC_TARGET_OS STREQUAL "linux")
   set(LIBC_TARGET_OS_IS_LINUX TRUE)
+elseif(LIBC_TARGET_OS STREQUAL "poky")
+  set(LIBC_TARGET_OS_IS_LINUX TRUE)
+  set(LIBC_TARGET_OS "linux")
 elseif(LIBC_TARGET_OS STREQUAL "darwin")
   set(LIBC_TARGET_OS_IS_DARWIN TRUE)
 elseif(LIBC_TARGET_OS STREQUAL "windows")

This is needed because cmake identifies the target OS as poky instead of linux. I could not find an option in yocto to change that.

Full libc support on riscv32 is not available yet, so remember to pass -DLLVM_LIBC_FULL_BUILD=OFF to cmake when building it for riscv32.
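With the patches applied, the configuration step could look something like the sketch below. This is only an illustration: the source paths, generator, and build type are assumptions, and the one flag the text above actually requires is -DLLVM_LIBC_FULL_BUILD=OFF.

```shell
# Hypothetical configure step, run from a fresh build directory next to an
# llvm-project checkout (paths and flags other than
# -DLLVM_LIBC_FULL_BUILD=OFF are assumptions).
cmake ../llvm-project/llvm -G Ninja \
  -DLLVM_ENABLE_PROJECTS=libc \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_LIBC_FULL_BUILD=OFF
# then build with ninja as usual
```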

Now just follow the instructions in The LLVM C Library documentation and get hacking!

Thanks a lot for this post. We will work through your patches soon and set up a RISCV32 builder after that.

What’s the advantage of a full distro and system emulation compared to qemu-user?

I haven’t looked at the tests, do they expect/require a “full system” for context? I’d guess they’re less portable than typical lit tests used by LLVM or libc++ even?

I am looking in this thread because just setting CMAKE_CROSSCOMPILING_EMULATOR to a qemu-user binary wasn’t sufficient for me to be able to run the tests. Is anyone using it to test one of the libc configurations?

> I haven’t looked at the tests, do they expect/require a “full system” for context? I’d guess they’re less portable than typical lit tests used by LLVM or libc++ even?

During my tests, qemu-user returned different errors on some syscalls than running the same tests in qemu-system.

Not exactly wrong results: e.g., for a syscall that could fail with either EAGAIN or EINVAL, the kernel would always return EAGAIN before EINVAL, while qemu-user returned them in the opposite order. Not wrong per se, but it made some of our tests fail.

> I am looking in this thread because just setting CMAKE_CROSSCOMPILING_EMULATOR to a qemu-user binary wasn’t sufficient for me to be able to run the tests. Is anyone using it to test one of the libc configurations?

Weird, CMAKE_CROSSCOMPILING_EMULATOR was enough for me to run the tests.

But yeah, we’ll be using CMAKE_CROSSCOMPILING_EMULATOR + qemu-system on the new rv32 buildbot. CMAKE_CROSSCOMPILING_EMULATOR is set to a script that copies the tests (and testdata), and runs the tests using ssh remote commands.
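Such a wrapper could be sketched as follows. Everything here is an assumption based on the qemu command earlier in the thread (the 10222 port forward, the root user); the SSH/SCP variables exist only so the commands can be inspected without a running guest.

```shell
# Hypothetical CMAKE_CROSSCOMPILING_EMULATOR wrapper: copy the test binary
# into the qemu guest over scp, run it there via ssh, and propagate the
# exit status back to cmake/ninja. SSH and SCP are overridable (e.g. with
# 'echo') for dry runs.
run_on_qemu() {
  SSH="${SSH:-ssh}"
  SCP="${SCP:-scp}"
  REMOTE="root@localhost"
  PORT="10222"    # matches hostfwd=tcp::10222-:22 in the qemu command
  bin="$1"; shift
  "$SCP" -P"$PORT" "$bin" "$REMOTE:/tmp/" &&
    "$SSH" -p"$PORT" "$REMOTE" "/tmp/$(basename "$bin")" "$@"
}

# invoke only when given arguments, so the file can also be sourced
if [ "$#" -gt 0 ]; then
  run_on_qemu "$@"
fi
```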

The rv32 buildbot is currently on staging here and requires D148797 ([libc] Start to refactor riscv platform abstraction to support both 32 and 64 bits versions) to build properly, but soon it should be running all the tests enabled with -DLLVM_LIBC_FULL_BUILD=OFF.

Okay maybe I should re-check what I’m doing wrong, then. I was able to patch some of the add_custom_commands with an explicit CMAKE_CROSSCOMPILING_EMULATOR in libc/cmake/modules/LLVMLibCTestRules.cmake to get the tests running.

Also - I’m running the runtimes cross build / full cross build configuration FWIW.

Hmm, that’s interesting. And I did notice some tests of exec*() that I would expect to fail with qemu-user. I was hoping we could find a subset of expected-to-work-in-qemu-user tests.

> Okay maybe I should re-check what I’m doing wrong, then. I was able to patch some of the add_custom_commands with an explicit CMAKE_CROSSCOMPILING_EMULATOR in libc/cmake/modules/LLVMLibCTestRules.cmake to get the tests running.
>
> Also - I’m running the runtimes cross build / full cross build configuration FWIW.

Yeah… you’ll need https://github.com/llvm/llvm-project/pull/66565 to run the hermetic and integration tests.

(also [libc] Fix pthread_create_test for 32 bit systems, llvm/llvm-project#66564, to fix one of the integration tests).

Right now the state of the full cross-build configuration is that we can build libc, but the tests are failing with linking errors:

ld.lld: error: undefined symbol: __udivdi3
>>> referenced by str_to_integer.h:138 (/home/mgadelha/tools/llvm-project/libc/src/__support/str_to_integer.h:138)
>>>               parser.cpp.o:(__llvm_libc::StrToNumResult<int> __llvm_libc::internal::strtointeger<int>(char const*, int)) in archive projects/libc/test/integration/src/stdio/liblibc.test.integration.src.stdio.sprintf_size_test.libc.a
>>> referenced by UInt.h:415 (/home/mgadelha/tools/llvm-project/libc/src/__support/UInt.h:415)
>>>               converter.cpp.o:(__llvm_libc::cpp::BigInt<448u, false>::div_uint32_times_pow_2(unsigned int, unsigned int)) in archive projects/libc/test/integration/src/stdio/liblibc.test.integration.src.stdio.sprintf_size_test.libc.a
>>> referenced by UInt.h:417 (/home/mgadelha/tools/llvm-project/libc/src/__support/UInt.h:417)
>>>               converter.cpp.o:(__llvm_libc::cpp::BigInt<448u, false>::div_uint32_times_pow_2(unsigned int, unsigned int)) in archive projects/libc/test/integration/src/stdio/liblibc.test.integration.src.stdio.sprintf_size_test.libc.a
>>> referenced 2 more times

ld.lld: error: undefined symbol: __umoddi3
>>> referenced by UInt.h:415 (/home/mgadelha/tools/llvm-project/libc/src/__support/UInt.h:415)
>>>               converter.cpp.o:(__llvm_libc::cpp::BigInt<448u, false>::div_uint32_times_pow_2(unsigned int, unsigned int)) in archive projects/libc/test/integration/src/stdio/liblibc.test.integration.src.stdio.sprintf_size_test.libc.a
>>> referenced by UInt.h:417 (/home/mgadelha/tools/llvm-project/libc/src/__support/UInt.h:417)
>>>               converter.cpp.o:(__llvm_libc::cpp::BigInt<448u, false>::div_uint32_times_pow_2(unsigned int, unsigned int)) in archive projects/libc/test/integration/src/stdio/liblibc.test.integration.src.stdio.sprintf_size_test.libc.a
>>> referenced by UInt.h:418 (/home/mgadelha/tools/llvm-project/libc/src/__support/UInt.h:418)
>>>               converter.cpp.o:(__llvm_libc::cpp::BigInt<448u, false>::div_uint32_times_pow_2(unsigned int, unsigned int)) in archive projects/libc/test/integration/src/stdio/liblibc.test.integration.src.stdio.sprintf_size_test.libc.a
>>> referenced 1 more times
clang++: error: linker command failed with exit code 1 (use -v to see invocation)

and

ld.lld: error: undefined symbol: operator delete(void*, std::align_val_t)
>>> referenced by copysignl_test.cpp:13 (/home/mgadelha/tools/llvm-project/libc/test/src/math/copysignl_test.cpp:13)
>>>               projects/libc/test/src/math/CMakeFiles/libc.test.src.math.copysignl_test.__hermetic__.__build__.dir/copysignl_test.cpp.o:(LlvmLibcCopySignTest_SpecialNumbers::~LlvmLibcCopySignTest_SpecialNumbers())
>>> referenced by copysignl_test.cpp:13 (/home/mgadelha/tools/llvm-project/libc/test/src/math/copysignl_test.cpp:13)
>>>               projects/libc/test/src/math/CMakeFiles/libc.test.src.math.copysignl_test.__hermetic__.__build__.dir/copysignl_test.cpp.o:(LlvmLibcCopySignTest_Range::~LlvmLibcCopySignTest_Range())
>>> referenced by FPMatcher.h:24 (/home/mgadelha/tools/llvm-project/libc/test/UnitTest/FPMatcher.h:24)
>>>               projects/libc/test/src/math/CMakeFiles/libc.test.src.math.copysignl_test.__hermetic__.__build__.dir/copysignl_test.cpp.o:(__llvm_libc::testing::FPMatcher<long double, (__llvm_libc::testing::TestCond)0>::~FPMatcher())

In fact I did need those. Thanks, that was timely.

I guess we may encounter some of the same 32-bit specific bugs like these. There are a few warnings I haven’t gotten around to addressing yet, related to differences in 32-bit types, for example.

I’m guessing that the compiler is emitting builtins and the test link passes -fno-builtin? Either the driver is overriding/ignoring -fno-builtin in the compilation case and preserving it at link time, or the cmake recipe adds it in one case and omits it in the other.

The compiler was actually generating these calls because there is no 64-bit division instruction in rv32; these functions are defined in libgcc/compiler-rt, but they were not being linked due to -nostdlib. I just landed a fix for the integration tests and they should all be passing now.

I’m also testing a fix for the operator delete linking error in the hermetic tests; if it works, I’ll submit a PR later today.

> I guess we may encounter some of the same 32-bit specific bugs like these. There’s a few warnings I haven’t gotten around to addressing yet relating to differences due to 32-bit types, for example.

There is one warning in particular that has been bothering me for a while, but I still haven’t found the time to check it:

/home/mgadelha/tools/llvm-project/libc/src/sys/mman/linux/mmap.cpp:44:56: warning: implicit conversion loses integer precision: 'off_t' (aka 'long long') to 'long' [-Wshorten-64-to-32]
   43 |       __llvm_libc::syscall_impl(syscall_number, reinterpret_cast<long>(addr),
      |       ~~~~~~~~~~~
   44 |                                 size, prot, flags, fd, offset);
      |                                                        ^~~~~~
1 warning generated.

I guess we don’t have tests that mmap with such a large offset to trigger an error, but I would expect it to fail in such cases.