I am currently working on a backend for the TriCore architecture.
Unfortunately, I have hit an issue with LLVM's internal representation
that's giving me a bit of a headache.
The problem is that LLVM assumes a pointer is equivalent to a
machine-word-sized integer. This implies that all pointer arithmetic
takes place in the CPU's general-purpose registers and is carried out
with the "regular" integer instructions.
Unfortunately, this does not hold true for the TriCore architecture,
which strictly differentiates between "normal" integer values and
pointer values. The register set is split into two halves: 16
general-purpose data registers %d0..%d15, which hold 32-bit integers and
floats, and 16 address registers %a0..%a15, which hold 32-bit pointers
and are operated on by a separate set of instructions. Moreover, the ABI
requires that pointer arguments to (and pointer results from) functions
be passed in address registers rather than in data registers.
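In TableGen terms, the split register file would look roughly like the
sketch below. All file, register, and class names here are my guesses at
what a TriCore backend would define, not existing code:

```tablegen
// Hypothetical TriCoreRegisterInfo.td fragment (names assumed).
// Data registers %d0..%d15: 32-bit integers and floats.
def DataRegs : RegisterClass<"TriCore", [i32, f32], 32,
                             (sequence "D%u", 0, 15)>;
// Address registers %a0..%a15: 32-bit pointers. Note that the value
// type still has to be written as i32, which is exactly the problem:
// nothing distinguishes these values from ordinary integers.
def AddrRegs : RegisterClass<"TriCore", [i32], 32,
                             (sequence "A%u", 0, 15)>;
```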
As LLVM internally converts all pointers to integers (in my case i32),
there is no way for a backend to tell whether an i32 operand is really
an integer or actually a pointer. Consequently, neither instruction
selection nor the calling-convention lowering works as expected.
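To make the calling-convention half of the problem concrete, here is the
kind of TableGen description I would like to write but cannot, because
CCIfType can only match value types and pointers have already been
lowered to i32 by this point (the register assignments are assumptions
about the TriCore ABI, not verified):

```tablegen
// Hypothetical CC_TriCore fragment.
def CC_TriCore : CallingConv<[
  // All i32 values -- integers AND pointers alike -- land here:
  CCIfType<[i32], CCAssignToReg<[D4, D5, D6, D7]>>
  // Desired but inexpressible: route pointer arguments to the
  // address registers [A4, A5, A6, A7] instead.
]>;
```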
It does not seem possible to solve this problem without modifying at
least some of the original LLVM source code. So what would be the
easiest (and least invasive) way to achieve this?
I have thought about adding a new ValueType (say, "p32") and overriding
TargetLowering::getPointerTy() to return that new type instead of i32.
Of course, this would probably be more of a dirty hack than an actual
solution, but hopefully it would do the trick, provided I'm not missing
something.
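As a self-contained model of that idea, stripped down so it compiles
without LLVM (the class and type names mirror LLVM's, but everything
here, including MVT::p32, is a stub for illustration):

```cpp
// Simplified stand-ins for LLVM's machine value types; p32 is the
// hypothetical new pointer type proposed above.
enum class MVT { i32, f32, p32 };

// Minimal stand-in for llvm::TargetLowering.
struct TargetLowering {
  // Default behaviour: pointers are plain machine-word integers.
  virtual MVT getPointerTy() const { return MVT::i32; }
  virtual ~TargetLowering() = default;
};

// A TriCore lowering that reports the distinct pointer type, so that
// instruction selection and calling-convention code could tell pointer
// values apart and route them to the address registers.
struct TriCoreTargetLowering : TargetLowering {
  MVT getPointerTy() const override { return MVT::p32; }
};
```

The hope is that everything downstream of getPointerTy() would then see
p32 instead of i32 and could be taught to treat it differently.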
Comments and suggestions are highly welcome.
Thank you for your time!