I tried out the new soft-float support from the mainline.
Overall, it looks very nice and quite clean. It is now extremely easy
to add soft-float support for your target: just do not call
addRegisterClass() for your FP types and they will be expanded into
libcalls.
But there are several minor things that would still be nice to have:
a) It is not possible to express that:
- f32 and f64 are both illegal and therefore are mapped to integers
- but only f64 is emulated on the target and there are no f32
arithmetic libcalls available (which is the case on our target)
To make this possible, f32 should always be promoted to f64 first,
and then the f64 operation should be applied.
I see a small problem here with the current code, since f32 would have
to be promoted to the illegal type f64. This might eventually require
some special-case handling. For example, what should
getTypeToTransformTo(f32) return? On the one hand, it is f64; on the
other hand, f64 is illegal.
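To illustrate the promotion scheme I have in mind, here is a minimal sketch; the `__target_dadd` routine is hypothetical and stands in for the only FP arithmetic libcall such a target actually provides:

```cpp
#include <cassert>

// Hypothetical f64 soft-float routine -- the only FP arithmetic
// libcall available on the target (there is no f32 counterpart).
extern "C" double __target_dadd(double a, double b) { return a + b; }

// What the legalizer would have to emit for an f32 add: promote both
// operands to f64, perform the operation via the f64 libcall, then
// truncate the result back to f32.
float soft_fadd32(float a, float b) {
    return (float)__target_dadd((double)a, (double)b);
}
```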
b) LLVM currently uses hard-wired library function names for
FP libcalls (and for ALL other libcalls as well). It would be nice if
these were customizable, since some targets (e.g. some embedded
systems) have existing naming conventions that must be followed
and cannot be changed. For example, on our embedded target all libcalls
for soft-float and some integer operations have very target-specific
names for historical reasons.
A target-specific TargetLowering subclass could be extended to handle
this, and it would require only minor changes.
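As a rough sketch of what I mean, a table of libcall names that a target's TargetLowering constructor could overwrite might look like this (the enum values, defaults, and helper below are purely illustrative, not an existing API):

```cpp
#include <cassert>
#include <cstring>

// Illustrative libcall enumeration; a real list would cover every
// FP and integer runtime routine the legalizer can emit.
enum Libcall { ADD_F64, SUB_F64, NUM_LIBCALLS };

// Defaults are the usual libgcc-style names.
static const char *LibcallNames[NUM_LIBCALLS] = { "__adddf3", "__subdf3" };

// A target-specific TargetLowering constructor would call this to
// install its historical names.
static void setLibcallName(Libcall LC, const char *Name) {
    LibcallNames[LC] = Name;
}
```

A target would then simply do e.g. `setLibcallName(ADD_F64, "__target_dadd");` in its constructor, and the legalizer would look the name up instead of using a string literal.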
c) LLVM libcalls currently pass their parameters on the stack. But on
some embedded systems, FP support routines expect their parameters in
specific registers.
At the moment, SelectionDAGLegalize::ExpandLibCall() explicitly uses
CallingConv::C, but it could be made customizable by introducing a
special libcall calling convention or even better by allowing the
target-specific lowering of libcalls. In fact, this could be combined
with the solution for (b): target-specific lowering would then take
care of the libcall names as well as how parameters and return values
are handled.
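Extending the idea from (b), the same kind of table could carry a per-libcall calling convention, so that ExpandLibCall() no longer hard-wires CallingConv::C. A sketch, with hypothetical enum values and helper (a real implementation would reuse LLVM's CallingConv namespace):

```cpp
#include <cassert>

// Illustrative calling-convention identifiers.
enum CallConv { CC_C, CC_TargetRegisters };

enum Libcall { ADD_F64, SUB_F64, NUM_LIBCALLS };

// Default: every libcall uses the C calling convention, as today.
static CallConv LibcallCC[NUM_LIBCALLS] = { CC_C, CC_C };

// A target could mark selected libcalls as taking their parameters
// in registers instead of on the stack.
static void setLibcallCallingConv(Libcall LC, CallConv CC) {
    LibcallCC[LC] = CC;
}
```

ExpandLibCall() would then consult `LibcallCC[LC]` when building the call node rather than always emitting CallingConv::C.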
d) Would it be possible with current implementation of soft-float
support to map f32/f64 to integer types smaller than i32, e.g. to i16?
I have the impression that this is not currently the case, since it
would require splitting f64 into four parts.
This question is more about a theoretical possibility. At the moment
my embedded target has i32 registers. But some embedded systems
are still only 16-bit, which means they would need something like a
mapping of f64 to four i16 values.
I'm wondering how easy or difficult it would be to support such a
mapping to any integer type.
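To make the four-part split concrete, here is the decomposition a 16-bit target would need: the f64 value bitcast to i64 and then split into four i16 words. This is a plain C++ illustration of the data layout, not legalizer code:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Split an f64 into four i16 parts, least-significant word first.
// Conceptually this is "expand" applied twice: f64 -> two i32 halves,
// then each i32 half -> two i16 halves.
void splitF64(double d, uint16_t parts[4]) {
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);   // bitcast f64 -> i64
    for (int i = 0; i < 4; ++i)
        parts[i] = (uint16_t)(bits >> (16 * i));
}
```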
My impression is that (b) and (c) are very easy to implement, but (a)
and (d) could be more challenging.
Evan, I guess you are the most qualified person to judge this,
since you implemented the new soft-float support.
What do you think about these extension proposals?