I'm working on a project to generate native code for a domain specific
language where the user defines functions in the complex plane. This
implies that I need to support complex numbers as a datatype with
LLVM. It's fairly straightforward to create a struct of two floats (or
doubles, etc.) and do the simple operations like add, subtract,
multiply, divide, etc.
However, things get stickier when we get to the trig functions. At
that point, I'd rather defer to the trig functions implemented in C++,
possibly taking them from boost's TR1 support instead of my compiler's
TR1 support. At any rate, these functions aren't like the cos
function in the math library because they are possibly template
expansions from a header file.
This leaves me with two basic types of questions:
1. Is there a "best" representation for complex numbers in LLVM?
Is my simple "struct of two floats" idea sufficient?
Assuming you're talking about how to represent complex values internally within your functions, you have three reasonable options:
A) You can pass the components around as a first-class aggregate containing two floats.
B) You can pass the components around as a vector of two floats.
C) You can pass the components around as two separate values.
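In IR type terms, the three options might look like this (a sketch; the function names are illustrative):

```llvm
; A) first-class aggregate of two floats
declare { float, float } @cadd_agg({ float, float } %a, { float, float } %b)

; B) vector of two floats -- add/sub fall out as single vector ops:
;      %sum = fadd <2 x float> %a, %b
declare <2 x float> @cadd_vec(<2 x float> %a, <2 x float> %b)

; C) each complex operand passed as two separate scalar values
declare { float, float } @cadd_scalars(float %are, float %aim,
                                       float %bre, float %bim)
```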
Working with a single value instead of two, i.e. using A or B, has clear elegance advantages. The disadvantage is that since most operations on complex values aren't just component-wise, you'll probably find yourself doing a lot of decompositions and recompositions. Those operations will generally be optimized away, but they can cost you compile time.
Another thing to consider is that LLVM doesn't have an autovectorizer right now, so if you want to make sure that adds and subtracts are done as vector operations, you'll probably need to emit them as such in the frontend.
Is there a way to maintain binary compatibility with
std::complex<float> or std::complex<double> so that those types can
be used directly when calling LLVM generated native code?
The psABI memory layout for _Complex blah is a struct of two blah on every platform I know of. The parameter-passing and return-value scheme is much more complicated; you basically have to examine the IR output of a C/C++ frontend (e.g. clang -S -emit-llvm) and do whatever it does, keeping in mind that it can vary from platform to platform.
2. What's the best way to expose the C++ complex trig, etc., functions?
Is there a way I can use LLVM IR to write the type signature of
these functions?
For the correct IR type signature, again, you'll need to run them through a C++ frontend on every platform you're interested in. That will also tell you the right mangling to use.
Alternatively, if you're running as a JIT, you can ignore the mangling problem by emitting calls to notional functions (e.g. @complexFloatSin) and then provide JIT mappings for those functions to the address of the appropriate C++ library functions.