Complex intrinsics proposal and roundtable

This is another proposal about introducing complex types into LLVM. Following on from the earlier discussions, this one differs in that it doesn’t propose complex types directly but instead represents complex numbers as vectors and operates on them with intrinsics. See also Florian’s proposal along the same lines (starting with complex multiply) here:

Representation of complex types

The proposal is to represent complex numbers as vectors of 2N floating-point elements. For example, <2 x float> would represent a scalar complex number, while <4 x float> would represent a vector of two complex floating-point numbers, with the first complex number living in lanes 0 and 1 and the second living in lanes 2 and 3. This representation of complex types matches the vector form used by x86.

The basic arithmetic operations are mapped as follows:

  • + or -: fadd or fsub <2 x float> %a, %b
  • *: call <2 x float> @llvm.complex.multiply(<2 x float> %a, <2 x float> %b)
  • /: call <2 x float> @llvm.complex.divide(<2 x float> %a, <2 x float> %b)
  • Building complex values, creal, cimag: the existing insertelement, extractelement, and shufflevector instructions as appropriate
  • cabs: call float @llvm.complex.abs(<2 x float> %val)
  • cconj: call <2 x float> @llvm.complex.conj(<2 x float> %val)
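For concreteness, here is a sketch of what building and decomposing complex values could look like in IR under this representation (the value names are illustrative, not part of the proposal):

```llvm
; Build the scalar complex value 1.0 + 2.0i in a <2 x float>.
%t = insertelement <2 x float> undef, float 1.0, i32 0
%c = insertelement <2 x float> %t, float 2.0, i32 1

; creal and cimag on the scalar form.
%re = extractelement <2 x float> %c, i32 0
%im = extractelement <2 x float> %c, i32 1

; On the vector form (<4 x float> = two complex numbers), shufflevector
; gathers the real lanes (0, 2) and the imaginary lanes (1, 3).
%res = shufflevector <4 x float> %v, <4 x float> undef, <2 x i32> <i32 0, i32 2>
%ims = shufflevector <4 x float> %v, <4 x float> undef, <2 x i32> <i32 1, i32 3>
```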

One complexity that hasn’t been covered in prior proposals is what complex multiplication actually means. Among our major source languages (C/C++/Fortran), there is some variance as to the definition of multiplication, division, and complex absolute values. This variation is most acute when looking at division. The naïve expansion of computing (a + bi)/(c + di) is

denom = c * c + d * d
real = (a * c + b * d) / denom
imag = (b * c - a * d) / denom

If you use Fortran, there is a requirement that the division be scaled to prevent overflow in computing denom (at the very least, this is how I’ve seen existing Fortran compilers implement it). If you use C, there is an additional requirement that the result be recomputed to infinity in certain cases where both the real and imaginary parts come out as NaN (see Annex G of the C standard). Using the CX_LIMITED_RANGE pragma, or an equivalent command-line option, lifts both of these requirements. Additionally, gcc provides a -fcx-fortran-rules flag that lifts only the latter (NaN-recomputation) requirement. My understanding is that all hardware implementations of complex multiply implement CX_LIMITED_RANGE rules.

My proposal is to distinguish between these situations using a mixture of existing fast-math flags and call-site attributes. Without any flags or call-site attributes, these intrinsics would expand to their compiler-rt equivalents (__mulsc3, __divsc3, etc.), which is to say they would have full C semantics (both NaN checking and scaling). The “complex-limited-range” call-site attribute would disable both of these requirements. The “complex-no-scale” call-site attribute would disable only the scaling requirement while retaining the NaN-checking behavior. Additionally, fast-math flags can be used to control this behavior: nnan or ninf by itself would drop the NaN-checking code.
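Under this scheme, the same intrinsic call would lower differently depending on its annotations. An illustrative sketch (the attribute spellings are those proposed above; the value names are mine):

```llvm
; No annotations: full C semantics, expands to a call to __divsc3.
%q0 = call <2 x float> @llvm.complex.divide(<2 x float> %a, <2 x float> %b)

; CX_LIMITED_RANGE: no scaling and no NaN recomputation.
%q1 = call <2 x float> @llvm.complex.divide(<2 x float> %a, <2 x float> %b) "complex-limited-range"

; Fortran-style rules: nnan drops the NaN recomputation but keeps scaling.
%q2 = call nnan <2 x float> @llvm.complex.divide(<2 x float> %a, <2 x float> %b)
```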

Implementation experience

I have implemented patches that pattern-match complex multiply and divide (in the CX_LIMITED_RANGE cases) early in InstCombine, and haven’t seen issues with that. Doing codegen for the non-CX_LIMITED_RANGE case, which requires a call to __mulsc3, is difficult because that function returns a C _Complex number, and the C ABIs for complex numbers tend to be inconsistent even among different floating-point types within the same architecture. The truly evil case is the i386 ABI for _Complex float, which is returned in edx:eax (or as an i64, as generated by clang).

If you want to talk more about this, I have a roundtable tomorrow, Friday, at 14:45 Eastern or 11:45 Pacific.

FYI, Nick just uploaded this prototype:

Hi Joshua,

I think that using 2x elements is a much more promising direction. A question, though: how much value is there in making these target-independent intrinsics? Are they actually general and portable enough across architectures to be worth abstracting for a frontend? If the frontend has to handle all the complexity anyway (your points about multiply and divide are well taken), there is little benefit to adding them.

Separately, do you plan to handle complex integers? Do you plan to support arbitrary bit width elements, and what is the legalization scheme for these?


To answer your second question first: we briefly discussed this at the complex round table last week. Complex integers have not been on anyone’s roadmap. During the discussion, it was pointed out that complex integer arithmetic tends to involve heterogeneous types (the result type of the arithmetic is not the same as its input type), which is not the case for complex floating-point types, so handling them the same way may not make the most sense.

I’ve posted a full patch of most of the pieces I’ve implemented here: One of the goals of the path I have gone down is to enable more consistent optimization of complex types within the compiler itself, since the variety of complex representations in the ABI means they can arrive at passes such as vectorization in inconsistent forms. I think there is value in having these intrinsics in the middle end of the optimizer, even independent of their value in representing hardware instructions, although going in as experimental for now sounds like the right approach.

A while back, when this idea first came up, I argued in favor of complex integer support (since Hexagon has instructions that do complex arithmetic on integers). I don’t understand the argument that the integer result has a different type from the inputs: standard arithmetic in C/C++ doesn’t change result types either, but overflow can produce UB, and complex arithmetic could do the same. Or are you talking about having to do full-precision arithmetic before producing the final result (to avoid overflow in intermediate values)?