First off, let me just get out of the way that I’m new to both Rust and LLVM. I’m a seasoned C programmer (since the 1980s) and a somewhat regular OS contributor, and I’m doing my best to solve this myself, but there are just too many moving parts for me to grok simultaneously.
This is all relative to Rust’s fork of LLVM.
I’m trying to debug some weird behavior while cross-compiling some Rust code for mipsel. I have Rust’s instrument-coverage option turned on, which, as best I can tell, just uses LLVM’s instrprof. Somehow or other, the object files end up with calls to __sync_fetch_and_add_8, a 64-bit atomic helper that mips32 can’t do natively.
What I think is happening:
Rust is internally invoking codegen and emitting a call to the llvm.instrprof.increment intrinsic. It’s also conceivable that this happens at toolchain build time and Rust is just stuffing in templated entries.
Something then optimizes and/or “lowers” that call down to this IR (retrieved from Rust via --emit=llvm-ir):
%0 = atomicrmw add ptr @__profc__RNvCs7qINHSN7ijU_4test4main, i64 1 monotonic, align 8, !dbg !326
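For reference, my understanding is that this IR is the moral equivalent of a C11 relaxed fetch-add on a 64-bit counter (LLVM’s monotonic ordering corresponds to memory_order_relaxed). A sketch, with a made-up function name standing in for the coverage-counter bump:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical stand-in for bumping a @__profc_* coverage counter slot.
 * C11 equivalent of: atomicrmw add ptr %counter, i64 1 monotonic
 * Returns the previous value, as atomicrmw does. */
uint64_t bump_counter(_Atomic uint64_t *counter) {
    return atomic_fetch_add_explicit(counter, 1, memory_order_relaxed);
}
```

On a 64-bit host this compiles to a single native atomic; the question is what happens when the target can’t express it in one instruction.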
This is ultimately converted (by LLVM, I believe) into the call to __sync_fetch_and_add_8, a function that doesn’t exist on this target.
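For context: as I understand it, __sync_fetch_and_add_8 is the legacy GCC-style __sync builtin for a 64-bit fetch-and-add, expected to be supplied by a support library when the hardware can’t do it natively. A lock-based sketch of what such a fallback might look like (the function name is mine, and this is simplified — real support libraries typically hash the address into a table of locks rather than using one global mutex):

```c
#include <pthread.h>
#include <stdint.h>

/* Simplified, hypothetical fallback for a 64-bit fetch-and-add on a
 * target with no native 64-bit atomics. One global lock for clarity. */
static pthread_mutex_t sync_lock = PTHREAD_MUTEX_INITIALIZER;

uint64_t my_sync_fetch_and_add_8(uint64_t *ptr, uint64_t val) {
    pthread_mutex_lock(&sync_lock);
    uint64_t old = *ptr;        /* __sync_fetch_and_add returns the */
    *ptr = old + val;           /* previous value */
    pthread_mutex_unlock(&sync_lock);
    return old;
}
```

The point being: the symbol has to come from *somewhere* at link time, and on my mipsel target nothing provides it.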
From reading about LLVM atomics, I don’t think the atomicrmw instruction should end up in the IR at all, since the mips target doesn’t support atomics that wide.
Looking at Target/MIPS, I don’t see a call to setMaxAtomicSizeInBitsSupported. So my read of the situation is that the value should remain 0 and thus block all atomic instructions? That doesn’t seem to happen, though.
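My mental model of the relevant decision — expressed as a sketch in C rather than actual LLVM code, with names of my own invention — is that AtomicExpandPass compares each atomic op’s width against the target’s declared maximum and expands oversized ops into library calls:

```c
/* Sketch of my mental model, NOT actual LLVM code. */
typedef enum { LOWER_NATIVE, LOWER_LIBCALL } lowering_t;

lowering_t classify_atomic(unsigned op_size_in_bits,
                           unsigned max_atomic_size_in_bits) {
    /* If the target never called setMaxAtomicSizeInBitsSupported and
     * the limit really stayed 0, every atomic op would take the
     * libcall path. */
    if (op_size_in_bits > max_atomic_size_in_bits)
        return LOWER_LIBCALL;   /* e.g. an __atomic_* helper call */
    return LOWER_NATIVE;        /* e.g. an ll/sc loop on mips32 */
}
```

Under that model, a 64-bit atomicrmw on mips32 should become a libcall well before instruction selection — which is not what I’m observing.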
Further, even given that atomicrmw instruction, I don’t think LLVM should be emitting a __sync_fetch_and_add_8 call for it on MIPS.
Can anybody suggest a starting point for further exploration?