How to prevent LLVM's default optimization

I have an instruction pattern like

```llvm
%2 = add i32 %0, 25
%3 = mul i32 %2, 525
```

and LLVM will optimize it to

```llvm
%2 = mul i32 %0, 525
%3 = add i32 %2, 13125
```

How can I prevent this?

Hi, James,

Thanks for your reply.

I do not think it is always true that “mul then add” is faster than “add then mul”.

For example,

A small immediate can be directly encoded in the instruction, but after the multiplication is folded in it becomes a larger constant, which may have to be loaded from a constant pool (an extra memory access).

So I wonder: is it possible to prevent this via configuration of the base class TargetLowering, rather than by writing special custom C++ code?


This is likely more of a canonicalization than an optimization. It is done so that whether the input is an add followed by a mul, or a mul followed by an add, both are canonicalized to the same sequence: maybe not the optimal sequence, but at least the same one. I didn’t check, but I suspect this is happening in InstCombine in the middle end.

Yes - this has been in InstCombine for a long time:

We could say that the canonicalization should be reversed, but that probably uncovers more missing optimizations.

The code size concern is legitimate. For example, on x86, gcc's asm is 2 bytes smaller on this example:

To improve this, we could add a generic transform to DAGCombiner that inverts the transform done in IR. That transform would only be enabled via a TargetLowering hook, so targets can decide whether the constants or other factors (such as optimizing for size) make it worthwhile to reorder the ops.

The relevant DAGCombine is controlled by a hook: `DAGCombiner::isMulAddWithConstProfitable`.

It may be that, when optimising for size, this hook should take into account whether `c1*c2` fits into an add immediate, something that the DAGCombiner has access to via TargetLoweringInfo (but does not often use).

I imagine any change here could have far-reaching consequences in terms of introducing changes across lots of targets; I'm not sure what the best approach is to handle that.


Thanks. I have checked the hook `DAGCombiner::isMulAddWithConstProfitable`, and I think the following condition is too aggressive:

```cpp
// If the add only has one use, this would be OK to do.
if (AddNode.getNode()->hasOneUse())
  return true;
```

Shall we change it to:

```cpp
if (AddNode.getNode()->hasOneUse() && TargetLowering.isCheaperCommuteAddMul(…))
  return true;
```

The virtual hook method `isCheaperCommuteAddMul` would return true by default, but specific targets like ARM/RISC-V could make their own decision.

Just like `virtual bool TargetLowering::decomposeMulByConstant`.

What’s your opinion?


That sounds reasonable to me. So, to be clear: the case(s) you’re looking at are related to GEP expansion / address arithmetic, not general mul/add instructions in IR? I missed that from the original example. So we don’t need an inverting transform; we just need to limit the backend.

Sorry, I missed the earlier suggestion about using an existing hook to guard this.

I think that is:
```cpp
/// Return true if the specified immediate is legal add immediate, that is the
/// target has add instructions which can add a register with the immediate
/// without having to materialize the immediate into a register.
virtual bool isLegalAddImmediate(int64_t) const {
  return true;
}
```