I believe there’s an ABI issue in the Complex-to-Libm conversion: MLIR complex types are always lowered to LLVM structs of two elements, but this isn’t necessarily the ABI convention on every target.
For example, complex floats on x86_64 are lowered by clang to `[2 x float]` rather than `{float, float}`, so if you pass a `complex<f32>` from MLIR into e.g. `cpowf` from libm, you currently get an ABI break.
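One way to see the platform convention, independent of MLIR, is to wrap the libm call in C and look at the IR clang emits for it on the target in question; this wrapper is just an illustrative probe:

```c
// Probe for the complex-float calling convention: compile with
// `clang -O1 -S -emit-llvm probe.c` on x86_64 and inspect how the
// `float complex` arguments and return value are coerced in the IR.
#include <complex.h>

float complex wrap_cpowf(float complex x, float complex y) {
    return cpowf(x, y);
}
```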
I saw a previous discussion in “C representation of complex types?”, where the conclusion was that MLIR should only have a contract with LLVM, not with a platform ABI. That is fine for C calling into MLIR, as in the example there, but when lowering to a call to a C function we definitely can’t ignore the platform ABI, or we will get incorrect results.
As a concrete example, the following code doesn’t lower correctly for x86_64:
```mlir
func.func @foo(%i: complex<f32>, %j: complex<f32>) -> (complex<f32>) {
  %o = complex.pow %i, %j : complex<f32>
  return %o : complex<f32>
}
```
Compiled with `mlir-opt --convert-complex-to-libm test.mlir --convert-complex-to-llvm --convert-func-to-llvm | mlir-translate --mlir-to-llvmir -`, this gives:
```llvm
declare { float, float } @cpowf({ float, float }, { float, float })

define { float, float } @foo({ float, float } %0, { float, float } %1) !dbg !3 {
  %3 = call { float, float } @cpowf({ float, float } %0, { float, float } %1), !dbg !7
  ret { float, float } %3, !dbg !9
}
```
which is not the correct signature for `cpowf`, so you get an incorrect result if you call this function (even if it’s called from MLIR, not from C).
I’m not sure what the solution is here: is MLIR target-aware in any way? If so, the complex-to-libm conversion could possibly insert code to convert the `{float, float}` to `[2 x float]` before the call on x86_64?
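To make that concrete, here is a hand-written sketch of what such an ABI-adjusted lowering might emit, assuming the `[2 x float]` convention described above; the `extractvalue`/`insertvalue` repacking is purely illustrative, not something any existing pass produces:

```llvm
; Hypothetical sketch: repack the MLIR-side {float, float} into the
; assumed x86_64 ABI type [2 x float] around the libm call.
declare [2 x float] @cpowf([2 x float], [2 x float])

define { float, float } @foo({ float, float } %a, { float, float } %b) {
  ; unpack both struct arguments and rebuild them as arrays
  %a.re = extractvalue { float, float } %a, 0
  %a.im = extractvalue { float, float } %a, 1
  %a.0 = insertvalue [2 x float] undef, float %a.re, 0
  %a.1 = insertvalue [2 x float] %a.0, float %a.im, 1
  %b.re = extractvalue { float, float } %b, 0
  %b.im = extractvalue { float, float } %b, 1
  %b.0 = insertvalue [2 x float] undef, float %b.re, 0
  %b.1 = insertvalue [2 x float] %b.0, float %b.im, 1
  %r = call [2 x float] @cpowf([2 x float] %a.1, [2 x float] %b.1)
  ; convert the result back to the struct form MLIR expects
  %r.re = extractvalue [2 x float] %r, 0
  %r.im = extractvalue [2 x float] %r, 1
  %o.0 = insertvalue { float, float } undef, float %r.re, 0
  %o.1 = insertvalue { float, float } %o.0, float %r.im, 1
  ret { float, float } %o.1
}
```

Whether the conversion should do this repacking itself, or defer to some target-aware ABI layer, is exactly the open question.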