Hi!
I compile a subset of C++ to LLVM IR that is then executed by a
kind of virtual machine that is not capable of executing all instructions.
For example, if I have a struct with two floats and copy it from
one location to another, clang uses a bitcast to i64 to copy it
(x86 target).
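To make this concrete, here is a minimal example of such a copy (Pair
is just an illustrative name, and the IR in the comments is
approximate):

struct Pair { float a, b; };

void copy(Pair &dst, const Pair &src) {
  // On the x86 target clang copies the 8-byte aggregate through an
  // i64; the IR contains roughly (approximate, version-dependent):
  //   %0 = bitcast %struct.Pair* %src to i64*
  //   %1 = load i64* %0
  //   %2 = bitcast %struct.Pair* %dst to i64*
  //   store i64 %1, i64* %2
  dst = src;
}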
Can I implement my own TargetInfo to prevent this?
Of course I need my own TargetInfo to describe my virtual machine
anyway, but the question is how to prevent the generation of
bitcast ops.
-Jochen
> Hi!
> I compile a subset of C++ to LLVM IR that is then executed by a
> kind of virtual machine that is not capable of executing all
> instructions. For example, if I have a struct with two floats and
> copy it from one location to another, clang uses a bitcast to i64
> to copy it (x86 target).
You will never be able to eliminate all bitcasts, but...
> Can I implement my own TargetInfo to prevent this?
Yes, in this case it sounds like the bitcast is being generated by the
X86-specific ABI handling code. Implementing your own TargetInfo for
ABI lowering will allow you to avoid this specific bitcast.
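For reference, a rough sketch of what that lowering hook could look
like. This assumes the ABIInfo/ABIArgInfo interface used by clang's
ABI code in lib/CodeGen; the exact class names and signatures vary
between clang versions, so treat it as an outline rather than working
code:

// Sketch only: assumes clang's internal CodeGen ABI interface
// (ABIInfo / ABIArgInfo in lib/CodeGen); signatures vary by version.
class MyVMABIInfo : public clang::CodeGen::ABIInfo {
public:
  void computeInfo(clang::CodeGen::CGFunctionInfo &FI) const {
    // Classify everything as "direct" so small aggregates keep their
    // natural IR type instead of being coerced to an i64 the way the
    // X86 ABI code does.
    FI.getReturnInfo() = clang::CodeGen::ABIArgInfo::getDirect();
    for (clang::CodeGen::CGFunctionInfo::arg_iterator
             I = FI.arg_begin(), E = FI.arg_end(); I != E; ++I)
      I->info = clang::CodeGen::ABIArgInfo::getDirect();
  }
};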
-Chris
Hi!
Now I have implemented my TargetInfo, which derives from
clang::TargetInfo and defines type widths and alignments, but there
is no ABI lowering in it: clang still generates (in unoptimized code)
an llvm.memcpy.p0i8.p0i8.i32 to copy my struct. Is this hardcoded, or
are there other classes I have to override?
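Concretely, for the same illustrative struct as before (the IR in the
comment is approximate):

struct Pair { float a, b; };

void copy(Pair &dst, const Pair &src) {
  // At -O0 clang still lowers this assignment to a whole-aggregate
  // memcpy, roughly:
  //   call void @llvm.memcpy.p0i8.p0i8.i32(i8* %0, i8* %1, i32 8,
  //                                        i32 4, i1 false)
  dst = src;
}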
By the way, in TargetInfo.h there are lines like
unsigned getBoolWidth(bool isWide = false) const { return 8; } // FIXME
unsigned getBoolAlign(bool isWide = false) const { return 8; } // FIXME
Later on I'd like to define bool as 1 bit wide; would it be possible to fix this?
-Jochen
> Later on I'd like to define bool as 1 bit wide; would it be possible
> to fix this?
No. These are memory sizes. Memory is always in units of bytes.
-Chris
I don't think so, but try it out and see?
-Chris