Rotates, once again

Hi everyone!

I recently ran into some interesting issues with the generation of rotate instructions - the details are in the bug tracker (https://bugs.llvm.org/show_bug.cgi?id=37387 and related bugs) for those interested - and it brought up the issue of rotates in the IR again. This is a proposal that has been made (and rejected) several times, but I've been told that this time around we ran into issues not previously considered, and that I should bring the topic up on llvm-dev.

I'll try to summarize the status quo first and then bring up the tricky cases that might tip the balance in favor of rotate intrinsics.

Rotates

It might be worth generalizing this to an arbitrary bit-select.

My application needs to extract contiguous bits from multi-precision numbers, e.g. given a 128-bit number stored in an array a[2], we want to select 64 bits starting from bit position N:

result = (a[1] << (64-N)) | (a[0] >> N);

which is basically the same as your rotate if a[1] == a[0]. Some machines (e.g. x86 and PA-RISC) have this instruction in hardware. On x86 it's a double-edged sword: SHLD/SHRD may be microcoded, so it may be faster to expand to two shifts and an OR. Even so, it might help if I could encode this more directly.
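
One caveat on the expression above: in C, a[1] << (64-N) is undefined when N is 0 (a shift by 64), so a robust pure-C version needs that edge case handled explicitly (a sketch; extract64 is an illustrative helper):

#include <stdint.h>

// Select 64 contiguous bits starting at bit n of a 128-bit value stored
// as two 64-bit words (a[0] = low word, a[1] = high word); requires n < 64.
uint64_t extract64(const uint64_t a[2], unsigned n) {
    if (n == 0)                 // avoid the undefined shift by 64
        return a[0];
    return (a[1] << (64 - n)) | (a[0] >> n);
}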

Thanks for writing this up. I’d like to have this intrinsic too.

Another argument for having the intrinsic is shown in PR37426:

https://bugs.llvm.org/show_bug.cgi?id=37426

Vectorization goes overboard because the throughput cost model used by the vectorizers doesn’t match the 6 IR instructions that correspond to 1 x86 rotate instruction. Instead, we have:

$ opt 37426prevectorize.ll -S -cost-model -analyze

Cost Model: Found an estimated cost of 1 for instruction: %and = and i32 %cond, 31
Cost Model: Found an estimated cost of 1 for instruction: %shl = shl i32 %1, %and
Cost Model: Found an estimated cost of 1 for instruction: %sub = sub nsw i32 0, %cond
Cost Model: Found an estimated cost of 1 for instruction: %and5 = and i32 %sub, 31
Cost Model: Found an estimated cost of 1 for instruction: %shr = lshr i32 %1, %and5
Cost Model: Found an estimated cost of 1 for instruction: %or = or i32 %shl, %shr
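
For reference, those six instructions are what the usual UB-free C rotate-left idiom boils down to (a sketch; the function name is illustrative):

#include <stdint.h>

// Rotate left by n (modulo 32) without shift-by-32 UB; this is exactly
// the and/shl/sub/and/lshr/or sequence costed above.
uint32_t rotl32(uint32_t x, uint32_t n) {
    return (x << (n & 31)) | (x >> ((0u - n) & 31));
}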

The broken cost model also affects unrolling and inlining. Size costs are overestimated for a target that has a rotate instruction.
This cost problem isn’t limited to rotate patterns (it’s come up in the context of min/max/abs/fma too). But it would be simpler if we had a rotate intrinsic, and the 6-to-1 margin is the biggest I’ve seen.

Another potential benefit of a generic intrinsic is that it can replace the target-specific intrinsics that PowerPC and x86 already have. ARM translates its source-level target-specific intrinsics into regular IR, so it could use the generic intrinsic too.

More food for thought:

  1. Like bitwise rotate, Intel’s bitwise PDEP and PEXT instructions are expressible via multiple pure C bitwise operations. Nevertheless, clang/LLVM has intrinsics for these bitwise instructions (a portable C expansion of PEXT is sketched after this list).
  2. If we imagine an alternate universe where C had rotate operators from the beginning, then LLVM would probably have rotate intrinsics too, and we’d be discussing whether the LLVM rotate intrinsics were worth keeping or not given that clang could just emit a series of simpler bitwise operations. I’d imagine the examples below would be why LLVM would keep the rotate intrinsics (in this alternate universe).
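
For reference, here is roughly what the portable C expansion of PEXT looks like (a sketch; the single BMI2 instruction replaces this whole loop):

#include <stdint.h>

// Bit-by-bit PEXT in portable C: gather the bits of x selected by mask
// into the low bits of the result.
uint32_t pext32(uint32_t x, uint32_t mask) {
    uint32_t result = 0;
    for (uint32_t bit = 1; mask != 0; bit <<= 1) {
        if (x & mask & -mask)   // test x at the lowest set bit of mask
            result |= bit;
        mask &= mask - 1;       // clear the lowest set bit of mask
    }
    return result;
}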

Dave

I'd vote for the "valign" variant that David proposed. It becomes a rotate when both inputs are the same.

<ty> %result = @llvm.valign.<ty>(<ty> %a0, <ty> %a1, i32 s)
result = truncate (concat(a1,a0) >> s)

Where "concat" concatenates the bits of both inputs to form a value of a twice as long type, and "truncate" takes the lower half of the bits of its input.

-Krzysztof

Given that this is a general problem that occurs with other instruction sequences as well, wouldn't it make more sense to make the cost model handle more than one instruction, as suggested in PR31274 [1]?

[1] https://bugs.llvm.org/show_bug.cgi?id=31274

In all these cases (rotate, min, max, abs, fma, add-with-overflow, and probably many more) there's a tradeoff between elaborating them as simpler IR instructions and modeling them as their own instruction / intrinsic. A big disadvantage of introducing new instructions / intrinsics is that all optimizations have to be taught about them, increasing the compiler code base and the maintenance burden. On the other hand, too few instructions can make optimization difficult as well (in theory, one instruction like "subtract and branch if not equal to zero" could emulate all the others, but this wouldn't be very helpful for optimization). Since you put a lot of thought into how to canonicalize IR, can you elaborate more on this tradeoff? Can we find a set of criteria to determine whether an instruction pattern should get an intrinsic or not?

-Manuel

Yes, I think everyone agrees that we need to improve the cost model
interface. Note that there's a proposal for review for the 1 IR -> many
machine ops variant of this question:
https://reviews.llvm.org/D46276

I don't have formal criteria for deciding when a pattern deserves an intrinsic. Someone more knowledgeable may give us a better answer.

An informal metric might be: if the operation is supported as a primitive op or built-in in source languages, and it is supported as a single target instruction, can we guarantee 1-to-1 translation through optimization? We are failing that test in the loop-invariant C example presented here. We're also failing that test when operands have different sizes: a rotate goes in, logic and shifts come out. We even failed that test for an even more basic case before this fix:
https://reviews.llvm.org/D46656
And we may still be failing the basic case for other source languages/flavors because encoding this operation in existing IR ops isn't obvious.
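
As an illustration of the different-size-operands case, here's an assumed shape (not the exact test case from the bug reports):

#include <stdint.h>

// The value is 32 bits but the shift amount is 64 bits; a rotate pattern
// like this tends to come out of the backend as shifts and logic rather
// than a single rotate instruction.
uint32_t rotl_mixed(uint32_t x, uint64_t n) {
    return (x << (n & 31)) | (x >> ((32 - n) & 31));
}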

Another informal measure is: how do programmers express this operation
without a builtin? If they're resorting to inline asm, that's a strong sign
that the compiler needs an intrinsic.

Another metric could be: what is the probability that the optimizer can
improve the codegen if the operation is broken down into multiple IR
instructions?
As noted here, it's unlikely for rotate. If it is possible, then adding
folds to instcombine for this intrinsic isn't hard. Are any other passes
affected?

For reference, these are the current target-independent bit-manipulation
intrinsics - bswap, bitreverse, ctpop, ctlz, cttz:
http://llvm.org/docs/LangRef.html#bit-manipulation-intrinsics

The LLVM cost for the proposed rotate intrinsic should be in the same range
as those? Note that we would not just be adding code to support an
intrinsic. There are already ~200 lines of DAG matching code for rotate, so
we already have a cost for trying to get this optimized. We can remove that
after adding canonicalization to the intrinsic in IR.

It seems perfectly reasonable for LLVM users to expect this to happen reliably.

I'd like to take a look at the other side of the equation: the cost of adding a new intrinsic in terms of teaching passes to see through it, so we don't lose optimizations that worked before the intrinsic was added.

For example, clearly ValueTracking needs a few lines added so that computeKnownBits and friends don't get stopped by a rotate. Anyone have a reasonably complete list of files that need similar changes?

John

What do we want to do about masking of the shift amount for this proposed intrinsic? Do we need an explicit AND to keep it in bounds? X86 can delete the AND during isel since the hardware is well behaved for out-of-range values. The hardware only masks to 5 bits for 8- and 16-bit rotates for the purpose of flags, but the data result is still modulo the bit width. Since we don’t use the flags from rotates, we can remove the mask. But if the mask is explicit in IR, then LICM might hoist it and isel won’t see it to remove it.
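
A sketch of that LICM concern in C (illustrative only; the equivalent would happen in IR): the mask on a loop-invariant amount gets hoisted into the preheader, so per-block instruction selection no longer sees it next to the shifts and can't prove it redundant with x86's implicit masking.

#include <stdint.h>

void rotl_all(uint32_t *a, int n, uint32_t amt) {
    uint32_t m = amt & 31;  // loop-invariant mask: LICM hoists this
    for (int i = 0; i < n; ++i)
        a[i] = (a[i] << m) | (a[i] >> ((32 - m) & 31));
}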

A rotate intrinsic should be relatively close in cost/complexity to the existing bswap.

A grep of Intrinsic::bswap says we’d probably add code in:

  InstCombine
  InstructionSimplify
  ConstantFolding
  DemandedBits
  ValueTracking
  VectorUtils
  SelectionDAGBuilder

But I don’t think it’s fair to view those additions as pure added cost. As an example, consider that we have to add hacks to EarlyCSE to recognize multi-IR-instruction min/max/abs patterns. Intrinsics just work as-is there. So if you search for ‘matchSelectPattern’, you get an idea (I see 32 hits in 10 files) of the cost of not having intrinsics for those operations that we’ve decided are not worthy of intrinsics.

Thanks Sanjay!

At this point the cost/benefit tradeoff for rotate intrinsics seems pretty good.

John

I’m guessing nobody has started implementing any of the suggested rotate functionality since there are still open questions, but let me know if I’m wrong.

We’re still getting patches that try to work around the current limitations ( https://reviews.llvm.org/D48705 ), so we should move forward since we’ve approximated/justified the cost and benefits.

A scalar rotate moves bits, and such an operation doesn't make much sense for moving data across lanes in vectors. I voted for the valign variant originally, but I think that a per-element rotate would be the natural vector version of the scalar operation.

It could still do the "double shift", since that's more general; it just shouldn't be called valign. A per-byte cross-lane vector rotate (an actual vector valign) can be implemented using a shuffle, so I don't think an intrinsic for that is necessary.

-Krzysztof

Agreed. The per-element definition is the correct one.

– Steve

1. I'm not sure what you mean by "full vector" here - using the same shift distance for all lanes (as opposed to per-lane distances), or doing a treat-the-vector-as-bag-of-bits shift that doesn't have any internal lane boundaries? If the latter, that doesn't really help you much with implementing a per-lane rotate.

I think the most useful generalization of a vector funnel shift in this context is lane-wise

result[i] = trunc(concat(a[i], b[i]) >> c[i])

(or the equivalent for a left shift); the special case a==b is a rotate.

2. For operand sizes that have native rotate instructions, at least x86, x86-64, ARM A32/T32, and AArch64 A64 agree that rotate distances are modulo the operand width. I believe PPC and MIPS do the same but am not sure (it's been a while); no clue about other architectures.

It certainly seems the most natural way to define it, since rotates are cyclic to begin with.

8- and 16-bit rotates will need to be lowered into multi-instruction sequences on most targets (x86 and x86-64 can do them directly, but RISC-lineage archs usually don't have rotates narrower than word size). Having explicit modulo semantics might end up forcing an extra AND there, so there's some cost, but it would certainly be nice to have the rotate definition be total.

-Fabian

I also agree that the per-element rotate for vectors is what we want for this intrinsic.

So I have this so far:

declare i32 @llvm.catshift.i32(i32 %a, i32 %b, i32 %shift_amount)
declare <2 x i32> @llvm.catshift.v2i32(<2 x i32> %a, <2 x i32> %b, <2 x i32> %shift_amount)

For scalars, @llvm.catshift concatenates %a and %b, shifts the concatenated value right by the number of bits specified by %shift_amount modulo the bit-width, and truncates to the original bit-width.

For vectors, that operation occurs for each element of the vector:
result[i] = trunc(concat(a[i], b[i]) >> c[i])

If %a == %b, this is equivalent to a bitwise rotate right. Rotate left may be implemented by subtracting the shift amount from the bit-width of the scalar type or vector element type.

Or just negating, iff the shift amount is defined to be modulo and the machine is two's complement.
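
In code, assuming modulo semantics on a two's complement machine (rotr32/rotl32 are illustrative helpers):

#include <stdint.h>

// -n mod 32 == (32 - n) mod 32, so rotate-left is rotate-right by the
// negated amount; no subtraction from the bit-width is needed.
static uint32_t rotr32(uint32_t x, uint32_t n) {
    n &= 31;
    return (x >> n) | (x << ((32 - n) & 31));
}

static uint32_t rotl32(uint32_t x, uint32_t n) {
    return rotr32(x, 0u - n);
}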

I'm a bit worried that while modulo is the Obviously Right Thing for rotates, the situation is less clear for general funnel shifts.

I looked over some of the ISAs I have docs at hand for:

- x86 (32b/64b variants) has SHRD/SHLD, so both right and left variants. Count is modulo (mod 32 for 32b instruction variants, mod 64 for 64b instruction variants). As of BMI2, we also get RORX (non-flag-setting ROR) but no ROLX.

- ARM AArch64 has EXTR, which is a right funnel shift, but shift distances must be literal constants. EXTR with both source registers equal disassembles as ROR and is often special-cased in implementations. (EXTR with source 1 != source 2 often has an extra cycle of latency). There is RORV which is right rotate by a variable (register) amount; there is no EXTRV.

- NVPTX has SHF (https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#logic-and-shift-instructions-shf) with both left/right shift variants and with both "clamp" (clamps shift count at 32) and "wrap" (shift count taken mod 32) modes.

- GCN has v_alignbit_b32 which is a right funnel shift, and it seems to be defined to take shift distances mod 32.

Based on that sampling, modulo behavior seems like a good choice for a generic IR instruction, and if you're going to pick one direction, the right-shift variant is the one to use. Not sure about other ISAs.

-Fabian

Sorry for the late reply to this thread. I'd like to mention that the existing ISD ROTL/ROTR opcodes aren't currently defined with modulo behaviour, so that definition would need to be tidied up and made explicit; the recent legalization code might need fixing as well. Are you intending to add CONCATSHL/CONCATSRL ISD opcodes as well?

Additionally, the custom SSE lowering that I added doesn't assume modulo (although I think the vXi8 lowering might work already), and it only lowers ROTL at the moment (mainly due to a legacy of how the XOP instructions work), but adding ROTR support shouldn't be difficult.

Yes, if we’re going to define this as the more general 2-operand funnel shift, then we might as well add the matching DAG defs and adjust the existing ROTL/ROTR defs to match the modulo.

I hadn’t heard the “funnel shift” terminology before, but let’s go with that because it’s more descriptive/accurate than concat+shift.

We will need to define both left and right variants. Otherwise, we risk losing the negation/subtraction of the shift amount via other transforms and defeat the point of defining the full operation.

A few more examples to add to Fabian’s:

  • x86 AVX512 added vprol*/vpror* instructions for 32/64-bit element vector types with constant and variable rotate amounts; per the docs, the “count operand modulo the data size (32 or 64) is used”.

  • PowerPC defined scalar rotates with ‘rl*’ (everything is based on rotating left). Similarly, Altivec only has ‘vrl*’ instructions for vectors and all ops rotate modulo the element size. The funnel op is called “vsldoi”. So again, it goes left.

Initial patch proposal:
https://reviews.llvm.org/D49242

I tried to add anyone that replied to this thread as a potential reviewer. Please add yourself if I missed you.