/// i1 add → xor.
if (MaxRecurse && Op0->getType()->isIntOrIntVectorTy(1))
  if (Value *V = simplifyXorInst(Op0, Op1, Q, MaxRecurse - 1))
    return V;
As you can see, it works for test7, but it is not enough for test8.
But InstCombine can do it, and this is how the InstCombine pass optimizes test8:
Value *LHS = I.getOperand(0), *RHS = I.getOperand(1);
Type *Ty = I.getType();
if (Ty->isIntOrIntVectorTy(1))
  return BinaryOperator::CreateXor(LHS, RHS);
I have two questions. Why doesn’t InstSimplify apply the InstCombine logic as-is?
And can I use the InstCombine logic to optimize test8 in InstSimplify?
InstCombine uses InstSimplify, so InstCombine is a superset of InstSimplify. The difference is that InstSimplify doesn’t modify or introduce instructions. It only performs simplifications where it can return an existing value (such as one of its operands) or a constant.
I looked at the optimization logic in InstSimplify that is specific to the case where the operands are i1. There are implementations in simplifyAddInst, simplifySubInst, simplifyXorInst, etc. For example, simplifyAddInst calls simplifyXorInst when given operands of type i1:
/// i1 add → xor.
if (MaxRecurse && Op0->getType()->isIntOrIntVectorTy(1))
  if (Value *V = simplifyXorInst(Op0, Op1, Q, MaxRecurse - 1))
    return V;
I want to add a transformation like this, but I’m wondering whether this logic is unneeded, as you said.
For example:
/// i1 add, sub, xor can be switched.
if (MaxRecurse && Op0->getType()->isIntOrIntVectorTy(1)) {
  if (auto *Op0BinOp = dyn_cast<BinaryOperator>(Op0)) {
    auto Op0Opcode = Op0BinOp->getOpcode();
    if (Op0Opcode == Instruction::Add)
      if (Value *V = simplifyBinOp(Op0Opcode, Op0, Op1, Q, MaxRecurse - 1))
        return V;
  }
}
If InstCombine already handles a pattern, there is (usually) no need to handle it in InstSimplify unless you can actually move the whole fold from InstCombine to InstSimplify.