I’ve recently been trying to implement the model attribute for LoongArch targets (GCC doc, implementation). I’ve since found it’s a bit trickier than I originally thought, so I figured I’d best ask here.
For some background: LoongArch (and probably other arches as well) sometimes wants to enforce a different code model for accessing certain variables than the globally configured one, for more-or-less “edge” cases like the percpu arrangement in Linux/LoongArch modules. This is because the values may be placed in specially located sections that the default (smaller) code model cannot reach, and it’s undesirable to have to compile the entire project with the larger code model, because performance would mildly suffer for the vast majority of accesses.
In my attempt at implementing the attribute, I found the LLVM part arguably easier: I defined special address spaces that are interpreted as “code model overrides” at codegen time (patch pending, because I’m not sure it’s the best way forward). However, I can’t simply leverage the existing address_space attribute: pointer casts to e.g. a normal void * are forbidden, which is appropriate for truly heterogeneous cases but not for this one. And when I tried to wire up the attribute, I found the infrastructure seems to be absent for recording per-declaration code model or address space overrides (needed in order to insert the appropriate addrspacecasts at LLVM IR emission time).
So, is this particular semantic feasible to implement within the current LLVM/Clang architecture? And if it is doable, what’s the best way to proceed from a long-term perspective? Many thanks in advance!
I don’t think address spaces are the right way to model this. Code models are purely at the machine code level and shouldn’t affect middle-end optimizations.
Some people have been looking into mixing object files built with different code models, and making that work with LTO would be nice. Separately, it’d be nice if, e.g., when building with the medium code model (where accesses to globals go through the more expensive instruction sequence), we could opt out for a specific performance-sensitive global and use the small code model’s instruction sequence, while laying that global out close to text. So a representation that works for both manual frontend specification and mixed code models would be ideal.
According to your linked GCC doc, the code model names are `normal` and `extreme`? How do those map to LLVM’s small/medium/large code models? LoongArch seems to implement small/medium/large in LLVM.
This should probably be a per-global attribute in LLVM IR, similar to the existing `dso_local` attribute or `section` specifier. Going the section route seems right to me, e.g. `@g = global i32 0, section "foo", code_model "large"`. Backends will need to be updated to respect this.
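In context, the proposal might look like this (a sketch only; the `code_model` global attribute is hypothetical and not accepted by current LLVM):

```
; A global that opts into the large code model while the module
; is otherwise compiled with the default one.
@g = global i32 0, section "foo", code_model "large"

define i32 @get_g() {
  ; the backend would emit the large-model addressing sequence
  ; for this access, and the default sequence for other globals
  %v = load i32, ptr @g
  ret i32 %v
}
```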
Then you can mirror whatever Clang does to map section attributes from the source code to the IR.
Wouldn’t this essentially be based on the triple used on the command line?
For example, my OpenVMS compilers use a hybrid code model. It is close to “medium”, but always goes through the GOT for static data even in the same module, AND always goes through the GOT to call a routine even in the same module (it might be in a different section). We also require a frame pointer (there is already a control knob for that).
Thanks for your suggestion! It seems extending the LLVM IR could be the way to go (although I’d wait for more opinions before getting my hands dirty again), and it’s reassuring to see someone else with a similar use case.
How does this work in your case? In 64-bit LoongArch’s case, the code model is decoupled from the target tuple, which only encodes the ABI, meaning you can freely combine any of the 3 code models with any of the 3 defined ABIs via the `-mcmodel=` flag. We don’t have the freedom to put the code model specification into the target tuple as well.
Also, the crux is that we have to allow certain (global) values to be accessed differently from what the globally specified code model dictates, so a code model in the target tuple wouldn’t help in any case.