Hi Folks,
I’m working on a port of musl libc to macOS (arch triple is “x86_64-xnu-musl”) to solve some irreconcilable issues I’m having with libSystem.dylib. I don’t want to use glibc for various reasons, mainly because I want to link statically. I have static PIE + ASLR working, which is not actually supported by the Apple toolchain (*1), but I managed to get it to work. I’m sure Apple might say “Don’t do that”, but from looking at the history of the xnu kernel ABI, it seems to be very stable between versions.
I am from Apple, and I will say “Don’t do that.” The kernel ABI for our platforms is not stable; we only guarantee stability at the dynamic link boundary (in this case, public symbols exported from libSystem). While the kernel syscall numbers have not changed (though the kernel team reserves the right to change them), the parameter lists and argument marshaling for them certainly have changed. We also do not support static executables on our system.
We even had bincompat issues related to this during the last major release (macOS 10.12 Sierra): Go implemented its own syscall support, which broke every Go binary that used gettimeofday when the internal interface changed. You can follow the discussion of their issues here: <https://github.com/golang/go/issues/16606>. In their case they wanted to avoid invoking an external linker like ld64 or lld, as opposed to avoiding libSystem, but the effect is the same: they shipped a tool that caused unsuspecting developers surprise bincompat issues that were entirely avoidable.
I’m aware that Go has had some issues with the XNU ABI boundary.
I’m working on a CPU simulator / binary translator, and I need control of the process address space layout. It seems I may ultimately need to use Hypervisor.framework; however, that is a lot more work in the short term.
I actually thought about mentioning Hypervisor.framework, but I was not sure about your use case. If you need full control of the address space (__PAGEZERO control, overriding the shared cache mappings, etc.) it is really the only supported mechanism.
The issue I am having with libSystem.dylib is the lack of weak linkage (versus weak_import) i.e. weak aliases. I don’t want to use a wrapper binary with DYLD_INSERT_LIBRARIES. I want to interpose Libc symbols with some of the symbols present in my binary (memory allocator, mmap). Interposition support is somewhat lacking in the Mach-O toolchain and runtime linker despite the Mach-O format technically supporting what I need (N_INDR and N_WEAK_DEF).
Dyld does not generally use nlists at runtime except for things like dladdr(), and has not for the last 10 years or so. Instead dyld uses a trie to publish exports, and a small byte code language to describe binding imports. We still support using nlists for old binaries, but anything built with recent tools also contains the newer trie and bind opcodes, which will be used if they are available. I do not think our tries can express the sort of import semantics you want.
I see two ways of potentially doing it (short of Hypervisor.framework). Both of them are a bit gross and have some bincompat risks, but given you are an open source project and can rev if need be, that may not be an enormous issue:
You could specify a custom segment in your executable with zero file size and a vm size that blocks out the address range you need, and then unmap it. There may be practical limits that prevent you from achieving what you want.
That’s an interesting approach worth exploring. I wasn’t sure I could unmap the zero page at runtime.
I’m currently using -Wl,-pagezero_size,0x1000, which frees up the lower address space, but of course the Libc allocator starts using this address space; the default on x86_64 with ld64 is a 4GiB zero page, and just above it is where Libc normally allocates. I believe Libc is passing NULL as the address hint to mmap or vm_allocate and is getting the default address returned by the kernel, so it allocates at the lowest address possible. I’m likely going to have a similar problem if I unmap a region after startup and then call a Libc function that allocates using one of the internal zones. Even after I replaced the default malloc zone, it seems other zones had already been created and appeared to be used internally by Libc.
In fact, during this process I’ve been working on minimising my Libc footprint, which is obviously required if I want to run in Hypervisor.framework. The early CRT initialisation hooks for C++ and the image relocation machinery will be required; however, I may well end up with a tiny stub that loads an ELF image if I do in fact use Hypervisor.framework. I’m using C++ so I can use vector, map, string, shared_ptr, etc., which I find much safer than traditional C and raw pointers.
You could use implicit interposing. This is a feature added so that ASAN binaries can avoid the whole re-exec with DYLD_INSERT_LIBRARIES issue. It is not guaranteed to be stable, but in practice it is probably the most stable option short of using a hypervisor. The way it works:
I saw the ASAN interposition patch that avoids DYLD_INSERT_LIBRARIES however I was not sure how it worked.
Define all the symbols in a dylib along with an interpose section (as though you were going to load it with DYLD_INSERT_LIBRARIES). Directly link your executable to that dylib. Dyld will discover the interpose section during dependency analysis (before libSystem initializers are run) and apply it. This has only been tested in the case of our sanitizer runtimes, but it SHOULD work.
This might be an approach worth exploring, as I can then interpose vm_allocate and/or mmap to add an address hint to coax Libc into using a reserved area of memory. I actually tried to get this to work, but my interposed functions were not called, as they were in the main executable, e.g.
https://opensource.apple.com/source/dyld/dyld-433.5/include/mach-o/dyld-interposing.h.auto.html
So I guess I need a dependency on an additional dylib which has my interposed functions. It’s a pity dyld only searches dylibs and not the main executable for interpose sections (as it didn’t appear to work with an interpose section in the main executable).
If I could use N_INDR and N_WEAK_DEF to have early bound (runtime link time) interposition with symbols in my binary replacing the C library allocator and mmap, and have libSystem use my implementations then I would be happy. libSystem itself would need to use weak aliases. This is possible with C libraries on other platforms.
I’ve tried relentlessly to intercept the malloc_zone implementation. malloc_zone_register is not sufficient, as some of the internal zones are tied to the internals of Libc, and I am getting heap collisions between Libc-allocated objects and my guest address space. On Linux I have enough control to do what I need and can interpose my symbols to implement versions of libc functions that I wish to override. The problem on darwin is that I am not able to interpose the malloc implementation until main starts, and at that point it is too late, as the C library has already created its internal zones. I’m also unable to interpose mmap. I have already looked at the interpose symbol tricks, but they don’t meet my purposes (not wanting to re-exec with DYLD_INSERT_LIBRARIES). Weak aliases from libSystem to the allocator implementation and various public symbols, along with N_INDR and N_WEAK_DEF, would be required for me to achieve what I need (somewhat like the elegant internal implementation of musl libc).
With my current solution (musl on xnu) I have successfully reserved 0x1000 – 0x7fff_0000_0000: essentially the low 128TiB, minus the 4GiB at the top of that range where I place my translator and translator stack. This is satisfactory for my user-mode simulator to emulate Linux processes on macOS.
I think Hypervisor.framework is probably the correct interface to be using if I want to avoid the kernel ABI; however, that is a lot more work than making syscall wrappers, and I would need to implement communication from the VM process to the host process.
I think that this is probably your best choice from a binary compatibility standpoint in the long run. It is a lot of work, though I am not sure if it is really that much more work than trying to port a new libc or maintain a custom toolchain.
Yes, both of them are quite a bit of work. I need early boot code to switch the CPU into long mode and a virtual device to communicate with the host process, e.g. for console IO. Of course I also need a thread implementation and a bunch of other things.
Thanks,
Michael.