Tutorial: testing libc in riscv32 qemu

The following is a tutorial on how to create a riscv32 image using yocto so we can build and test LLVM’s libc.

1. Creating the riscv32 image

We are using yocto to generate the riscv32 image since it allows us to include the compiler along with the image; in our case, we’ll be including not only gcc but also Clang.

Yocto provides a GUI called toaster to configure and build images. However, I had issues when building the final image with it (some Python errors), so we'll be using the command line to configure and build our image.

1.1 Cloning the necessary repositories

We need to clone three repositories:

  • poky: a minimal Linux distribution that yocto uses to create the image with our custom configuration.
  • meta-oe: a layer of metadata that provides additional recipes and configuration files for the OpenEmbedded build system.
  • meta-clang: a layer of metadata that adds support for building the Linux kernel and user-space applications with the Clang/LLVM toolchain, as an alternative to the more commonly used GCC toolchain.

git clone https://github.com/yoctoproject/poky.git
git clone https://github.com/kraj/meta-clang.git
git clone https://github.com/openembedded/meta-openembedded

The latest stable version of yocto is v4.1.3 (langdale). However, I had several issues with the packages included in it, including several kernel panics when trying to compile libc, so I suggest using the tip-of-tree (ToT) of each repo.

1.2 Configuring the image

Now that we cloned the repos, we need to configure poky with the custom packages we need to build libc.

cd poky/
source oe-init-build-env

Once you run these commands, you should now be in poky/build, which should contain two important files inside a conf directory:

  • conf/bblayers.conf: used to add the extra layers with recipes to build our image.
  • conf/local.conf: used to configure everything, from image size to the included packages.

In conf/bblayers.conf, you should include the path to the layers we just downloaded:

BBLAYERS = "\
  <path-to-poky-repo>/meta \
  <path-to-poky-repo>/meta-poky \
  <path-to-poky-repo>/meta-yocto-bsp \
  <path-to-meta-oe-repo>/meta-oe \
  <path-to-meta-oe-repo>/meta-python \
  <path-to-meta-oe-repo>/meta-networking \
  <path-to-meta-clang-repo> \
  "
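
Instead of editing conf/bblayers.conf by hand, the same result can be achieved with the bitbake-layers tool. This is just a sketch: run it from the build directory created by oe-init-build-env, and adjust the relative paths to wherever you actually cloned the repos.

```shell
# From poky/build, after sourcing oe-init-build-env.
# The relative paths below are assumptions; point them at your clones.
bitbake-layers add-layer ../../meta-openembedded/meta-oe
bitbake-layers add-layer ../../meta-openembedded/meta-python
bitbake-layers add-layer ../../meta-openembedded/meta-networking
bitbake-layers add-layer ../../meta-clang

# Verify the resulting layer stack:
bitbake-layers show-layers
```

Either way, conf/bblayers.conf should end up with the layer list shown above.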

In conf/local.conf, first, we need to configure yocto to include development tools, -dev packages, and debug tools in the image. Search for the EXTRA_IMAGE_FEATURES variable and add tools-sdk dev-pkgs tools-debug. It should look like this:

EXTRA_IMAGE_FEATURES ?= "debug-tweaks tools-sdk dev-pkgs tools-debug"

You can also configure yocto to include profile tools, test tools, and source packages by changing this option. The conf file should have a list of available options when it’s first generated.

Second, we need to (at least) set the image size (variable IMAGE_ROOTFS_SIZE), set the target machine to riscv32 (variable MACHINE), and include the necessary packages to build clang (variable IMAGE_INSTALL:append). The following lines, which can be added to the end of conf/local.conf, include all of these changes.

# sets the image size to around 50GB (the value is in KB)
IMAGE_ROOTFS_SIZE ?= "52300000"
DISTRO="poky"
PACKAGE_CLASSES="package_rpm"

# sets the target machine
MACHINE="qemuriscv32"
SSTATE_DIR="${TOPDIR}/../sstate-cache"

# include the following packages in the final image
IMAGE_INSTALL:append=" git cmake ninja htop vim bash-completion python3 python3-pip ntp mpfr bison flex dtc clang gdb"
IMAGE_FSTYPES="ext3 jffs2 tar.bz2"
DL_DIR="${TOPDIR}/../downloads"

1.3. Building the image

Once everything is set up, we are ready to build the image. From poky/build:

$ bitbake core-image-full-cmdline

Now sit back and wait. It took me a little more than 1 hour on a 32-core E5-2620 v4 @ 2.10GHz machine.

In the end, the image files should be placed in poky/build/tmp/deploy/images/qemuriscv32/:

$ ls tmp/deploy/images/qemuriscv32/
core-image-full-cmdline.env                                          fw_jump.elf
core-image-full-cmdline-qemuriscv32-20230425192407.qemuboot.conf     fw_payload.bin
core-image-full-cmdline-qemuriscv32-20230425192407.rootfs.ext3       fw_payload.elf
core-image-full-cmdline-qemuriscv32-20230425192407.rootfs.ext4       Image
core-image-full-cmdline-qemuriscv32-20230425192407.rootfs.jffs2      Image--6.1.20+git0+a8881762b5_423e199669-r0-qemuriscv32-20230425192407.bin
core-image-full-cmdline-qemuriscv32-20230425192407.rootfs.manifest   Image-qemuriscv32.bin
core-image-full-cmdline-qemuriscv32-20230425192407.rootfs.tar.bz2    modules--6.1.20+git0+a8881762b5_423e199669-r0-qemuriscv32-20230425192407.tgz
core-image-full-cmdline-qemuriscv32-20230425192407.rootfs.wic.qcow2  modules-qemuriscv32.tgz
core-image-full-cmdline-qemuriscv32-20230425192407.testdata.json     u-boot.bin
core-image-full-cmdline-qemuriscv32.ext3                             u-boot.elf
core-image-full-cmdline-qemuriscv32.ext4                             u-boot-initial-env
core-image-full-cmdline-qemuriscv32.jffs2                            u-boot-initial-env-qemuriscv32
core-image-full-cmdline-qemuriscv32.manifest                         u-boot-initial-env-qemuriscv32-2023.04-r0
core-image-full-cmdline-qemuriscv32.qemuboot.conf                    u-boot-qemuriscv32-2023.04-r0.bin
core-image-full-cmdline-qemuriscv32.tar.bz2                          u-boot-qemuriscv32-2023.04-r0.elf
core-image-full-cmdline-qemuriscv32.testdata.json                    u-boot-qemuriscv32.bin
core-image-full-cmdline-qemuriscv32.wic.qcow2                        u-boot-qemuriscv32.elf
fw_dynamic.bin                                                       uImage
fw_dynamic.elf                                                       uImage--6.1.20+git0+a8881762b5_423e199669-r0-qemuriscv32-20230425192407.bin
fw_jump.bin                                                          uImage-qemuriscv32.bin

Congratulations! Now you have a working riscv32 image that can be loaded with qemu!

2. Loading the riscv32 image using qemu

With the riscv32 image built, you can use the following command to boot it. Replace <path-to-image-dir> with the directory where the image is located and replace core-image-full-cmdline-qemuriscv32-XYZ.rootfs.ext4 with the appropriate file name.

qemu-system-riscv32 -nographic -machine virt -m 1G -smp 8 \
  -bios <path-to-image-dir>/fw_jump.elf \
  -kernel <path-to-image-dir>/Image \
  -append "root=/dev/vda rw" -drive id=disk0,file=<path-to-image-dir>/core-image-full-cmdline-qemuriscv32-XYZ.rootfs.ext4,if=none,format=raw \
  -device virtio-net-device,netdev=usernet \
  -netdev user,id=usernet,hostfwd=tcp::10222-:22 \
  -device virtio-blk-device,drive=disk0 \
  -object rng-random,filename=/dev/urandom,id=rng0 \
  -device virtio-rng-pci,rng=rng0 \
  -device virtio-tablet-pci \
  -device virtio-keyboard-pci
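
That command line is easy to mistype, so I keep it in a small wrapper script. This is only a sketch: launch_riscv32, its arguments, and the DRY_RUN toggle are my own names; the flags simply mirror the invocation above.

```shell
#!/bin/sh
# Sketch of a launcher for the qemu invocation above.
# Usage: launch_riscv32 <image-dir> <rootfs-file-name>
# Set DRY_RUN=1 to print the command instead of running it.
launch_riscv32() {
  imgdir=$1
  rootfs=$2
  set -- qemu-system-riscv32 -nographic -machine virt -m 1G -smp 8 \
    -bios "$imgdir/fw_jump.elf" \
    -kernel "$imgdir/Image" \
    -append "root=/dev/vda rw" \
    -drive "id=disk0,file=$imgdir/$rootfs,if=none,format=raw" \
    -device virtio-blk-device,drive=disk0 \
    -device virtio-net-device,netdev=usernet \
    -netdev user,id=usernet,hostfwd=tcp::10222-:22 \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0
  if [ -n "$DRY_RUN" ]; then
    echo "$@"
  else
    exec "$@"
  fi
}
```

For example, `launch_riscv32 tmp/deploy/images/qemuriscv32 core-image-full-cmdline-qemuriscv32-XYZ.rootfs.ext4` from poky/build.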

Some caveats:

  • Memory is limited to 1G by the kernel. I couldn’t figure out why.
  • You may need to adjust the -smp value for your setup.
  • At the end of the previous step, yocto suggests using the runqemu command to load the image. While it does load the image, I had issues getting the network working through the tap0 interface, which is why we use usernet in this example.

The first boot should take a little while, but eventually you'll be prompted with:

Poky (Yocto Project Reference Distro) 4.2 qemuriscv32 ttyS0

qemuriscv32 login:

By default, the root user has no password.

Once we log in, we need to set up the network and the system date so we can clone the llvm repo. I also set up a swap file, due to our RAM limit.

I run the following script every time I log into the qemu image:

# get an IP address on eth0
udhcpc -i eth0

# sync the system clock (ntpd must be stopped to force a large step)
service ntpd stop
ntpd -q -g
service ntpd start

# recreate an 8G swap file from scratch
swapon -s
swapoff -v /swapfile
swapon -s
rm -f /swapfile
fallocate -l 8G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
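
Recreating the swap file on every login is wasteful; a gentler variant only builds it when it is missing. This is a sketch using the same path and size as the script above — adjust both to taste.

```shell
#!/bin/sh
# Sketch: create the 8G swap file only if it does not exist yet.
if [ ! -f /swapfile ]; then
  fallocate -l 8G /swapfile
  chmod 600 /swapfile
  mkswap /swapfile
fi
# Activate it unless it is already active.
grep -q '/swapfile' /proc/swaps || swapon /swapfile
```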

You should now be able to clone the llvm repository over https. You should be able to ssh into the image too, using ssh root@localhost -p10222 (the port is defined in the qemu command line), and copy files using scp -P10222 foo root@localhost: (yes, it’s uppercase -P<port-number> here).

3. Building libc

Support for riscv32 is under active development, and the necessary patches are not yet upstream. For now, you will need to apply four patches to get the compilation working on riscv32:

  • a patch that fixes missing syscalls on 32-bit platforms
  • a patch that fixes compilation using UInt<128> (riscv32 has no uint128_t by default)
  • a patch that refactors the riscv abstraction layer to handle both riscv32 and riscv64

On top of those, you’ll also need the following local patch:

diff --git a/libc/cmake/modules/LLVMLibCArchitectures.cmake b/libc/cmake/modules/LLVMLibCArchitectures.cmake
index 2cd315b99e69..ee26f1949500 100644
--- a/libc/cmake/modules/LLVMLibCArchitectures.cmake
+++ b/libc/cmake/modules/LLVMLibCArchitectures.cmake
@@ -165,6 +165,9 @@ if(LIBC_TARGET_OS STREQUAL "baremetal")
   set(LIBC_TARGET_OS_IS_BAREMETAL TRUE)
 elseif(LIBC_TARGET_OS STREQUAL "linux")
   set(LIBC_TARGET_OS_IS_LINUX TRUE)
+elseif(LIBC_TARGET_OS STREQUAL "poky")
+  set(LIBC_TARGET_OS_IS_LINUX TRUE)
+  set(LIBC_TARGET_OS "linux")
 elseif(LIBC_TARGET_OS STREQUAL "darwin")
   set(LIBC_TARGET_OS_IS_DARWIN TRUE)
 elseif(LIBC_TARGET_OS STREQUAL "windows")

This is needed because cmake identifies the distro as poky instead of Linux. I could not find a yocto option to change that.

A full libc build is not supported on riscv32 yet, so remember to pass -DLLVM_LIBC_FULL_BUILD=OFF to cmake when building it for riscv32.
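
With the patches applied, the configure step on the image looks roughly like this. This is a sketch of the runtimes build described in the libc docs; the build directory name and the Ninja generator are my choices, and the exact flags may have changed since this was written.

```shell
# Sketch: configure and build libc inside the riscv32 image.
# Assumes llvm-project is cloned in the current directory and that
# clang/ninja are the ones yocto installed into the image.
cmake -S llvm-project/runtimes -B build -G Ninja \
  -DLLVM_ENABLE_RUNTIMES=libc \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DLLVM_LIBC_FULL_BUILD=OFF
ninja -C build libc

# run the unit tests:
ninja -C build check-libc
```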

Now just follow the instructions in the LLVM C Library documentation and get hacking!


Thanks a lot for this post. We will work through your patches soon and set up a RISCV32 builder after that.

What’s the advantage of a full distro and system emulation compared to qemu-user?