GEP index canonicalization

Hi,

InstCombine canonicalizes GEP index operands to pointer size (unless they index into struct types). The comment says: "If we are using a wider index than needed for this platform, shrink it to what we need. If narrower, sign-extend it to what we need. This explicit cast can make subsequent optimizations more obvious."

For our architecture, this canonicalization is a bit problematic. For example, our load operation can take an index of any width and will implicitly sign-extend or truncate it. The explicit cast, however, shows up as an extra operation in the machine code. It is of course easy to eliminate the cast with a peephole optimization in the backend.

More interesting is the effect of this canonicalization on subsequent transformations. Which optimizations does it actually make more obvious, as the comment claims? I found examples where it enables IndVarSimplify to promote an index variable to pointer size. However, that introduces truncations, which can no longer be optimized away by a simple peephole in the backend.

Does it make sense to add a target hook for this?

-Manuel

Example:

define void @foo(i32 %n, i32* %a) {
entry:
   %cmp1 = icmp slt i32 0, %n
   br i1 %cmp1, label %for.body, label %for.end

for.body: ; preds = %for.body, %entry
   %i = phi i32 [ %inc, %for.body ], [ 0, %entry ]
   %ptr = getelementptr inbounds i32, i32* %a, i32 %i
   store i32 %i, i32* %ptr, align 4
   %inc = add nsw i32 %i, 1
   %cmp = icmp slt i32 %inc, %n
   br i1 %cmp, label %for.body, label %for.end

for.end: ; preds = %for.body, %entry
   ret void
}

InstCombine introduces a sext instruction:

define void @foo(i32 %n, i32* %a) {
entry:
   %cmp1 = icmp sgt i32 %n, 0
   br i1 %cmp1, label %for.body, label %for.end

for.body: ; preds = %for.body, %entry
   %i = phi i32 [ %inc, %for.body ], [ 0, %entry ]
   %0 = sext i32 %i to i64
   %ptr = getelementptr inbounds i32, i32* %a, i64 %0
   store i32 %i, i32* %ptr, align 4
   %inc = add nsw i32 %i, 1
   %cmp = icmp slt i32 %inc, %n
   br i1 %cmp, label %for.body, label %for.end

for.end: ; preds = %for.body, %entry
   ret void
}

IndVarSimplify promotes %i to i64, requiring two additional truncs:

define void @foo(i32 %n, i32* %a) {
entry:
   %cmp1 = icmp sgt i32 %n, 0
   br i1 %cmp1, label %for.body.preheader, label %for.end

for.body.preheader: ; preds = %entry
   br label %for.body

for.body: ; preds = %for.body.preheader, %for.body
   %indvars.iv = phi i64 [ 0, %for.body.preheader ], [ %indvars.iv.next, %for.body ]
   %ptr = getelementptr inbounds i32, i32* %a, i64 %indvars.iv
   %0 = trunc i64 %indvars.iv to i32
   store i32 %0, i32* %ptr, align 4
   %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1
   %lftr.wideiv = trunc i64 %indvars.iv.next to i32
   %exitcond = icmp ne i32 %lftr.wideiv, %n
   br i1 %exitcond, label %for.body, label %for.end.loopexit

for.end.loopexit: ; preds = %for.body
   br label %for.end

for.end: ; preds = %for.end.loopexit, %entry
   ret void
}

We have much the same problem: LLVM likes to "canonicalize" things to i64 because we have 64-bit pointers, but we only have 32-bit arithmetic (and our addressing modes don't accept 64-bit offsets), so this is rarely actually a good idea.

—escha

*ping*

I'm pretty sure I was the last person to touch this code, so let me try to summarize my recollection of the situation.

Originally, this logic lived only in SelectionDAG. As a result, GEPs with non-pointer-sized indices flowed through the rest of the optimizer.

IIRC, the specific problem which motivated the InstCombine rules came up in IndVarSimplify. An instruction which implicitly widens or shrinks its operand isn't really modelled. It's not a correctness problem, but it was leading to poor optimization/canonicalization decisions. There was also a concern that many other places in the optimizer had the same systemic bias. In general, having "one true way" to represent sign extension and truncation seemed likely to lead to better overall results.
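
To make the "one true way" concrete, here is the same address computation before and after the canonicalization (a minimal hand-written illustration, assuming 64-bit pointers):

   ; before: the i32 index is sign-extended implicitly by GEP semantics
   %p = getelementptr inbounds i32, i32* %base, i32 %i

   ; after: the extension is a separate instruction the optimizer can reason about
   %0 = sext i32 %i to i64
   %p = getelementptr inbounds i32, i32* %base, i64 %0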

> Hi,
>
> InstCombine canonicalizes GEP index operands to pointer size (unless they index into struct types). The comment says: "If we are using a wider index than needed for this platform, shrink it to what we need. If narrower, sign-extend it to what we need. This explicit cast can make subsequent optimizations more obvious."
>
> For our architecture, this canonicalization is a bit problematic. For example, our load operation can take an index of any width and will implicitly sign-extend or truncate it. The explicit cast, however, shows up as an extra operation in the machine code. It is of course easy to eliminate the cast with a peephole optimization in the backend.

Writing the peephole or isel seems like the obvious solution here. Is there a reason that's not working out? (Beyond the one discussed below?)

> More interesting is the effect of this canonicalization on subsequent transformations. Which optimizations does it actually make more obvious, as the comment claims? I found examples where it enables IndVarSimplify to promote an index variable to pointer size. However, that introduces truncations, which can no longer be optimized away by a simple peephole in the backend.

At least in the example you gave below, it looks like IndVarSimplify widened something that wasn't profitable to widen. That's a separate issue from how we represent the sign extensions.

I think there are a number of ways in which integers of an undesired width result in redundant instructions. For a PowerPC example, see https://llvm.org/bugs/show_bug.cgi?id=25581 (see comment 3; the problem is more general than originally reported, and the fix is limited to the reported problem rather than the general one). Maybe we need a transformation, somewhere shortly before ISel, that looks at integers and makes a final decision about their widths.

GEP indices would be particularly interesting candidates for such a transformation. It would definitely require target-specific information, and the algorithm for such a pass should come with strict complexity (running-time) guarantees.
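
As a hand-written sketch (not the output of any existing pass), such a transformation could take the widened loop from the IndVarSimplify output above and, on a target with only 32-bit arithmetic, narrow the induction variable back to i32, leaving a single sext for the backend to fold into its addressing:

for.body:
   %i = phi i32 [ 0, %for.body.preheader ], [ %inc, %for.body ]
   %idx = sext i32 %i to i64
   %ptr = getelementptr inbounds i32, i32* %a, i64 %idx
   store i32 %i, i32* %ptr, align 4
   %inc = add nsw i32 %i, 1
   %exitcond = icmp ne i32 %inc, %n
   br i1 %exitcond, label %for.body, label %for.end.loopexit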

Any thoughts?