Redundant Add Operation in Code Generation?

I’m curious why I am seeing this:

%uglygep18.sum = add i32 %lsr_iv8, %tmp45
%scevgep19 = getelementptr i8* %parBits_017, i32 %uglygep18.sum
%scevgep1920 = bitcast i8* %scevgep19 to i16*
%tmp78 = load i16* %scevgep1920, align 2
%uglygep14.sum = add i32 %lsr_iv8, %tmp45
%scevgep15 = getelementptr i8* %extIn_013, i32 %uglygep14.sum
%scevgep1516 = bitcast i8* %scevgep15 to i16*
%tmp79 = load i16* %scevgep1516, align 2
%conv93.i.i = sext i16 %tmp79 to i32
%uglygep.sum = add i32 %lsr_iv8, %tmp45
%scevgep11 = getelementptr i8* %sysBits_010, i32 %uglygep.sum
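
For comparison, a common-subexpression-elimination pass should collapse the three identical adds into one. This is a hand-written sketch of the expected result, not actual pass output, and the %sum name is made up:

```llvm
; Sketch only: what EarlyCSE/GVN-style cleanup could produce from the
; snippet above (same old typed-pointer syntax; %sum is hypothetical).
%sum = add i32 %lsr_iv8, %tmp45                          ; computed once
%scevgep19 = getelementptr i8* %parBits_017, i32 %sum
%scevgep1920 = bitcast i8* %scevgep19 to i16*
%tmp78 = load i16* %scevgep1920, align 2
%scevgep15 = getelementptr i8* %extIn_013, i32 %sum      ; add reused
%scevgep1516 = bitcast i8* %scevgep15 to i16*
%tmp79 = load i16* %scevgep1516, align 2
%conv93.i.i = sext i16 %tmp79 to i32
%scevgep11 = getelementptr i8* %sysBits_010, i32 %sum    ; add reused
```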

You can see here that "add i32 %lsr_iv8, %tmp45" is computed three times, so two of the adds appear to be redundant, yet they are still generated. Why is that?

Thanks.

CodeGenPrepare manipulates GEPs in a way that can expose redundancy
where it wasn't obvious before.

-Eli

Eli,

Thanks. So I’m unclear: exactly which LLVM optimization pass performs this kind of redundancy elimination (copy-propagation-like behavior)?

It seems to me that CodeGenPrepare is doing a useful thing for me (since I’m just using the LLVM IR and not going to the backend), provided it’s “exposing” existing redundancy rather than simply adding instructions to set things up for later CodeGen optimizations (or something along those lines)?

Thanks.

Eli,

Actually, I still see this issue without CodeGenPrepare being run. I’m also compiling with -O3.

So I’m still not sure why I’m seeing this issue?

Thanks.

Hmm... maybe it's LSR doing it?

Anyway, the general idea is that there are only a few passes which
perform redundant instruction elimination: GVN, EarlyCSE, and
InstCombine (IIRC).
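
For anyone following along, each of those passes can be run explicitly with the opt tool. This is a sketch using the legacy pass-manager flag spellings of that era; with the new pass manager the equivalent would be along the lines of `opt -passes=early-cse` etc.:

```shell
# Run each redundancy-eliminating pass by hand on the IR and compare.
opt -S -early-cse input.ll -o cse.ll
opt -S -gvn input.ll -o gvn.ll
opt -S -instcombine input.ll -o ic.ll
```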

-Eli

Eli,

Ok, thanks. I was hoping -instcombine would get rid of this obvious redundancy, but it did not. It’s not a big deal, but since this code is inside a loop, it could cause a significant performance loss.

That’s LSR, as you can see from the variable names ;) It might think that load(Base + Index) is a legal addressing mode for your target. -disable-lsr might be the right thing for you anyway.

Incidentally, MachineCSE could clean this up if it doesn’t get folded into the address, but like LSR, it tries hard not to increase register pressure.

-Andy

That solves the issue, but it seems odd to me that instcombine doesn’t take care of it?

So is this just a setup for the backend? If not, it seems that if LSR can create these redundant operations, it should clean up after itself? Or am I mistaken?

LSR is part of the backend. It’s lowering the IR for a specific target. It seems to think those redundant operations are good for reducing register pressure, but doesn’t actually have much knowledge about register pressure. At this point, we won’t do any more IR level “cleanup” since that tends to undo lowering. The Machine IR passes will do some careful cleanup.

-Andy

I believe that LSR uses a "context-free" SCEV expander, which processes each fixup separately. I think that this is where the "uglygep" code comes from. If LSR were to detect redundant expressions, it would add complexity to the code. It would be better to simply run CSE as a pre-ISel pass if a cleanup before code generation is necessary. Otherwise the MI optimizations should be able to take care of this.

-K

Ok, thanks.

Even so, I would still expect -instcombine (run after LSR) to do this cleanup?

It’s valid to run any IR pass after -loop-reduce. So you can try it. -gvn is probably what you’re looking for. It isn’t something we normally want to do.
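
Concretely, that experiment might look like the following (legacy pass-manager flags assumed, as elsewhere in this thread):

```shell
# Sketch: run GVN immediately after LSR to clean up its redundant adds.
opt -S -loop-reduce -gvn input.ll -o cleaned.ll
```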

Turn off LSR if it does bad things on your target.
-Andy

Yes, I’m running lots of passes after -loop-reduce, so that’s not an issue.

I’ll try -gvn. I just thought -instcombine (by the nature of what it is supposed to do) would handle this issue, but it does not.