LLVMdev Digest, Vol 77, Issue 41

Yes, the new type is simply a static object managed by Type and LLVMContext. It is only referred to by Values of fixed-point type. New instructions should not interfere with existing passes: passes switch on the opcodes they handle, and would simply remain unaware of the new ones.

My main concern is that code evolution demands following the LLVM release series going forward. Even if this got done locally, it would be a hassle to maintain if future releases did not include it at all. So even if only a small part of it got committed - say, the parts that do not concern or disturb the optimizers, like the type and instructions themselves - that would be better than nothing at all. If some other party then wanted to contribute, it would not lead to enormous merge headaches.

Also, the DSPTargetMachine class would not interfere at all with the LLVMTargetMachine. Personally, I think LLVM could become a leading DSP choice with these two features, but maybe that's just me? :slight_smile:

regards,

/Jonas

Date: Fri, 26 Nov 2010 18:08:53 +0200
From: edwintorok@gmail.com
To: jnspaulsson@hotmail.com
CC: llvmdev@cs.uiuc.edu
Subject: Re: [LLVMdev] LLVMdev Digest, Vol 77, Issue 41

Hi Jonas,

I am investigating the possibilities of incorporating fixed
point support into the LLVM I/R.

I think you should write a rationale explaining why you want to
introduce new types etc rather than using the existing integer
types, with intrinsic functions for the operations, or some other
such scheme. Introducing new types is hard work and creates a
maintenance burden for everyone, since they will need to be
properly supported by all parts of the compiler forever more. It
is therefore important to give a cogent explanation of why this
is the best approach, why the benefits outweigh the costs, and so
forth.

Also can’t fixed point be handled entirely by the frontend?
You store the scaling factor as an attribute on the type.

When you perform operations that involve the same fixed point types
you can perform them with integers, and when you need to perform
conversions you emit the appropriate code to perform the
conversions. The emitted LLVM IR needs to know nothing about the
scaling factors involved.
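
To make that concrete, here is a rough sketch of what a frontend-only scheme might emit; the Q8.24/Q16.16 formats and the value names are illustrative assumptions only, not anything LLVM itself defines:

    ; same-format arithmetic is just integer arithmetic
    %sum = add i32 %x, %y          ; Q8.24 + Q8.24 -> Q8.24
    ; format conversions are shifts emitted by the frontend
    %to.q16 = ashr i32 %x, 8       ; Q8.24 -> Q16.16 (drop 8 fraction bits)
    %to.q24 = shl i32 %z, 8        ; Q16.16 -> Q8.24 (may overflow; ignored here)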

<retitling to be useful>

LLVM shouldn't have a fixed point type class. You should just use standard integer types. Supporting fixed point and saturation should be done by adding new operations to LLVM IR. If you're interested in this, I'd suggest starting by implementing these as intrinsics. If it makes sense over time we can change them to primitive instructions if there is an advantage to doing so.
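
As an illustration only (the intrinsic name and signature below are hypothetical; nothing like them exists in the tree), a saturating fixed-point multiply over plain i32s might start life as:

    declare i32 @llvm.fixsmul.sat.i32(i32, i32, i32)

    define i32 @mul_q8_24(i32 %a, i32 %b) {
      %r = call i32 @llvm.fixsmul.sat.i32(i32 %a, i32 %b, i32 24)
      ret i32 %r
    }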

-Chris

Hi,

all right, no fixed point type in LLVM :frowning:

May I ask then, what could one expect from various optimizations when using intrinsics to support the fixed point type? LTO, Value optimizations, mem ??

Are you saying it is feasible to add intrinsics and some extra optimizers for these, then?

Best regards,

Jonas

all right, no fixed point type in LLVM :frowning:

May I ask then, what could one expect from various optimizations when using
intrinsics to support the fixed point type? LTO, Value optimizations, mem ??

You'd have to implement explicit support for the new intrinsics in
various places. For value optimization, I imagine you'll want to add
support to both lib/Analysis/ConstantFolding.cpp (for when all
arguments are constants) and
lib/Transforms/InstCombine/InstCombineCalls.cpp (for when at least one
isn't).
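
For instance, a fully-constant call of a hypothetical saturating multiply intrinsic @llvm.fixsmul.sat.i32 could be folded away entirely (16777216 and 50331648 being 1.0 and 3.0 in Q8.24):

    ; before ConstantFolding: 1.0 * 3.0 in Q8.24
    %r = call i32 @llvm.fixsmul.sat.i32(i32 16777216, i32 50331648, i32 24)
    ; after: the call disappears and every use of %r becomes the constant
    ;   i32 50331648   ; 3.0 in Q8.24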

LTO support would be automatic since I can't really imagine
-instcombine not running during LTO (unless perhaps inlining is
disabled, in which case it probably won't matter anyway), and that's
just one of the passes that try to constant fold instructions
(including intrinsic calls).

One "obvious" optimization to add to -instcombine would be to
substitute regular integer operations when it's safe: when there's
provably no overflow that might need to saturate (don't forget to add
nsw/nuw in this case), and no problems regarding whatever else, if
anything, makes these different from "plain old ints".
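
As a concrete (hypothetical) example with a saturating fixed-point add @llvm.fixsadd.sat.i32: if both operands come from zero-extended i16s the sum can never leave the i32 range, so the call can become an ordinary add carrying the no-overflow flags:

    ; before
    %a32 = zext i16 %a to i32
    %b32 = zext i16 %b to i32
    %r   = call i32 @llvm.fixsadd.sat.i32(i32 %a32, i32 %b32)

    ; after -instcombine
    %r   = add nuw nsw i32 %a32, %b32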

The backends would also need support of course, because presumably you
can't *always* simplify them away :).

I'm not quite sure what you mean by "mem", but if they're marked
appropriately all the optimizers that care will know they don't access
memory (if that's what you meant). This should also allow passes like
GVN to handle them automatically.
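
In other words, a declaration along these lines (intrinsic name hypothetical) would be enough for GVN to fold the second of two identical calls into the first:

    declare i32 @llvm.fixsmul.sat.i32(i32, i32, i32) nounwind readnone

    %r1 = call i32 @llvm.fixsmul.sat.i32(i32 %a, i32 %b, i32 24)
    %r2 = call i32 @llvm.fixsmul.sat.i32(i32 %a, i32 %b, i32 24)   ; GVN: same as %r1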

Eventually, you may also want to make some of the analyses and some
other specific transformation passes aware of the semantics of the
intrinsics.

Are you saying it is feasible to add intrinsics and some extra optimizers
for these, then?

Should be, as long as backend support isn't a problem. And that's a
problem you'd have whether they're designed as intrinsics taking ints
or as new instructions and/or types.

You probably won't even need new optimization passes; just add some
switch cases to the ones that are already there.

Of course, you shouldn't go overboard with the intrinsics; for
example, I imagine that fixed-point types can just use 'icmp' for
comparisons since they're really just scaled integers. So only add the
ones you actually need, if only because it's less work both when
implementing them and when updating the optimizers to support them.
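
For example, two values in the same fixed-point format compare correctly with an ordinary icmp, since scaling by a positive constant preserves ordering:

    %lt = icmp slt i32 %x, %y    ; x < y for two signed Q8.24 values
    %eq = icmp eq  i32 %x, %y    ; exact equality needs nothing special either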

Can you not just lower your fixed-point operations to widen, perform
normal integer operation, shift and truncate? With LLVM's support for
arbitrary-width integer types, it might work surprisingly nicely. For
instance, an 8.24 multiply would be (sketched as IR after the list):
- widen the i32s to i56s
- multiply
- shift right 24
- truncate to i32
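
Written out as IR, and assuming signed values (hence sext/ashr), that sequence would be roughly:

    define i32 @fixmul_8_24(i32 %a, i32 %b) {
      %a.w  = sext i32 %a to i56
      %b.w  = sext i32 %b to i56
      %prod = mul i56 %a.w, %b.w        ; low 56 bits of the exact product
      %shft = ashr i56 %prod, 24        ; discard the extra 24 fraction bits
      %res  = trunc i56 %shft to i32    ; back to Q8.24 (wraps on overflow)
      ret i32 %res
    }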

Then you'd get working code free from LLVM's type legalizer and
friends, and it would just be up to the backends to recognize the
possibilities for doing things smarter, if they have relevant
instructions. (Just like it is for rotation, etc.) Optimizations
would see normal operations they already know how to simplify and
fold. And anything special -- like saturating -- would fall out from
the normal integer operations.
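
One way a saturating Q8.24 multiply could be spelled with nothing but ordinary IR (a sketch only; note it widens to i64 so the clamp can see the full product):

    define i32 @fixmul_8_24_sat(i32 %a, i32 %b) {
      %a.w  = sext i32 %a to i64
      %b.w  = sext i32 %b to i64
      %prod = mul i64 %a.w, %b.w
      %shft = ashr i64 %prod, 24
      %hi   = icmp sgt i64 %shft, 2147483647
      %t1   = select i1 %hi, i64 2147483647, i64 %shft
      %lo   = icmp slt i64 %t1, -2147483648
      %t2   = select i1 %lo, i64 -2147483648, i64 %t1
      %res  = trunc i64 %t2 to i32
      ret i32 %res
    }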

Hoping there isn't some really obvious reason that would fail,
~ Scott

Hi,

thanks a lot for the answer.

By mem, I meant optimizations that involve load/store intrinsics, e.g. llvm.fixPload(). What would the consequences of this be?

Let me ask, then: is there any interest at all in the LLVM community for fixed point support in the future? Are there even any successful local projects that you know of?

Did you mean that fixed point support in terms of intrinsics and code extensions could become part of the main line?

Regards,

Jonas Paulsson

May I ask then, what could one expect from various optimizations when using
intrinsics to support the fixed point type? LTO, Value optimizations, mem ??

Can you not just lower your fixed-point operations to widen, perform
normal integer operation, shift and truncate? With LLVM's support for
arbitrary-width integer types, it might work surprisingly nicely. For
instance, an 8.24 multiply would be:
- widen the i32s to i56s
- multiply
- shift right 24
- truncate to i32

Wouldn't the result of 8.24 * 8.24 be 16.48, requiring widening to i64
instead of i56?

Then you'd get working code free from LLVM's type legalizer and
friends, and it would just be up to the backends to recognize the
possibilities for doing things smarter, if they have relevant
instructions. (Just like it is for rotation, etc.) Optimizations
would see normal operations they already know how to simplify and
fold. And anything special -- like saturating -- would fall out from
the normal integer operations.

I think saturating might warrant intrinsics, but the above transform would
be one of the optimizations I suggested -instcombine might do if it's
provable that no saturation can occur.

And yes, I'd definitely suggest pattern-matching the pure-integer
versions in a backend for a target that supports the fixed-point ones
natively if they're more efficient than the expanded versions (i.e.
even if saturation isn't needed). It'd also help catch integer
operations with similar patterns.

By mem, I meant optimizations that involve load/store intrinsics, e.g.
llvm.fixPload(). What would the consequences of this be?

Since the proposed intrinsics would operate on regular integer types,
why can't you just use regular load and store instructions?
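
That is, since the value in memory is just an i32, the ordinary instructions already cover it (syntax as of this writing):

    %p = alloca i32
    store i32 %val, i32* %p        ; store a Q8.24 value
    %v = load i32* %p              ; load it back; no fixed-point intrinsic needed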

Did you mean that fixed point support in terms of intrinsics and code
extensions could become part of the main line?

If someone implements the intrinsics and they are useful on at least
one target that main line supports, I don't see why those patches
would be rejected. Especially since Chris seems to be on board :).

Wouldn't the result of 8.24 * 8.24 be 16.48, requiring widening to i64
instead of i56?

It could be, but since i32 * i32 in LLVM gives an i32, not an i64, I
thought this was more consistent, as the "i8.24" * "i8.24" would be an
"i8.24".

It also means that if the multiplication saturates, it gives the right
answer -- if you widened to i64, did the saturating multiplication,
and truncated, the fixed-point multiply wouldn't have been saturated.

I think saturating might warrant intrinsics, but the above transform would
be one of the optimizations I suggested -instcombine might do if it's
provable that no saturation can occur.

I was thinking of saturating as a flag on the instruction, like the
current nsw and nuw. Whether they should be that, intrinsics, or new
instructions is a decision for people with far more LLVM experience
than mine, though.