Hi,
I encountered an error while writing a custom rewrite pattern. I'm new to MLIR and am struggling to figure out how to handle it.
Error:

```
error: operand #1 does not dominate this use
note: see current operation: %12 = "tfl.mul"(%8, %13) {fused_activation_function = "NONE", tac.device = "GPU", tac.inference_type = "FLOAT"} : (tensor<1x1x4x2xf32>, tensor<1x1x4x2xf32>) -> tensor<1x1x4x2xf32>
note: operand defined here (op in the same block)
```
I'm trying to insert a MinimumOp with the code below. The error seems to occur because the replaced operation and the operation that uses its result are in the same block. While debugging, I found that the same code works when the replacement happens at a different location (see the commented-out variants in the code).
```cpp
mlir::Value operand_depth0 = tanh_op.getOperand();
TFL::MulOp mul_op_depth1 = operand_depth0.getDefiningOp<MulOp>();
if (!mul_op_depth1) return failure();

mlir::Value lhs_depth1 = mul_op_depth1.getLhs();
mlir::Value rhs_depth1 = mul_op_depth1.getRhs();

// Match the depth-2 mul/add in either operand order. Note: the fallback branch
// assigns to the outer variables instead of shadowing them; otherwise they
// would still be null after this block.
TFL::MulOp mul_op_depth2 = lhs_depth1.getDefiningOp<MulOp>();
TFL::AddOp add_op_depth2 = rhs_depth1.getDefiningOp<AddOp>();
if (!mul_op_depth2 || !add_op_depth2) {
  add_op_depth2 = lhs_depth1.getDefiningOp<AddOp>();
  mul_op_depth2 = rhs_depth1.getDefiningOp<MulOp>();
  if (!mul_op_depth2 || !add_op_depth2) return failure();
}

mlir::Value lhs_depth2 = add_op_depth2.getLhs();
TFL::MulOp mul_op_depth3 = lhs_depth2.getDefiningOp<MulOp>();
if (!mul_op_depth3) return failure();

mlir::Value lhs_depth3 = mul_op_depth3.getLhs();
TFL::MulOp mul_op_depth4 = lhs_depth3.getDefiningOp<MulOp>();
if (!mul_op_depth4) return failure();

// Clip the lhs of the depth-4 mul from above and square the clipped value.
float upper_bound = 10.0;
auto clipped_upper = InsertMinimumOp(mul_op_depth4.getLoc(), mul_op_depth4.getLhs(),
                                     upper_bound, &rewriter);
auto mul_converted = rewriter.create<TFL::MulOp>(
    mul_op_depth4.getLoc(), clipped_upper.getResult(), clipped_upper.getResult(),
    rewriter.getStringAttr("NONE"));
rewriter.replaceOp(mul_op_depth4, mul_converted.getResult());
// rewriter.replaceOp(mul_op_depth4, clipped_upper.getResult());  // shows the same error

// Below code works!
// float upper_bound = 10.0;
// auto clipped_upper = InsertMinimumOp(mul_op_depth1.getLoc(), mul_op_depth1.getRhs(),
//                                      upper_bound, &rewriter);
// auto mul_converted = rewriter.create<TFL::MulOp>(
//     mul_op_depth1.getLoc(), clipped_upper.getResult(), clipped_upper.getResult(),
//     rewriter.getStringAttr("NONE"));
// rewriter.replaceOp(mul_op_depth1, mul_converted.getResult());

// Below code also works!
// float upper_bound = 10.0;
// auto clipped_upper = InsertMinimumOp(tanh_op.getLoc(), operand_depth0, upper_bound, &rewriter);
// auto mul_converted = rewriter.create<TFL::MulOp>(
//     tanh_op.getLoc(), clipped_upper.getResult(), clipped_upper.getResult(),
//     rewriter.getStringAttr("NONE"));
// rewriter.replaceOp(tanh_op, mul_converted.getResult());

return success();
```
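If I understand the error correctly, the new Minimum and Mul ops are created at the rewriter's current insertion point (around the matched tanh_op), which sits after the other users of mul_op_depth4, so the replacement value ends up defined below the tfl.mul that still uses it. Is moving the insertion point to the op being replaced the intended way to handle this? A rough, unverified sketch of what I mean; only the setInsertionPoint call is new, the rest is copied from my pattern above:

```cpp
// Sketch of my guess, not verified: create the replacement ops at the position
// of mul_op_depth4 itself, so they dominate every existing user of its result.
rewriter.setInsertionPoint(mul_op_depth4.getOperation());

float upper_bound = 10.0;
auto clipped_upper = InsertMinimumOp(mul_op_depth4.getLoc(), mul_op_depth4.getLhs(),
                                     upper_bound, &rewriter);
auto mul_converted = rewriter.create<TFL::MulOp>(
    mul_op_depth4.getLoc(), clipped_upper.getResult(), clipped_upper.getResult(),
    rewriter.getStringAttr("NONE"));
rewriter.replaceOp(mul_op_depth4, mul_converted.getResult());
```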
For reference, this is what the input graph looks like. The rewrite pattern works for Tanh and for the Mul right before Tanh, but doesn't work for the others.
I wonder why the enclosing block behaves differently depending on where the replacement happens, and how I can deal with the dominance issue in the failing cases.
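To clarify what I mean by "deal with the dominance issue": would a check like the following be a reasonable way to detect the problem before calling replaceOp? This is only a sketch I put together while reading mlir/IR/Dominance.h, not code that is currently in my pass, so please correct me if DominanceInfo is not meant to be used this way inside a pattern:

```cpp
#include "mlir/IR/Dominance.h"

// Sketch only: verify that the freshly created value dominates every user of
// the op I want to replace before actually calling replaceOp.
mlir::DominanceInfo dom_info;
for (mlir::Operation *user : mul_op_depth4.getResult().getUsers()) {
  if (!dom_info.properlyDominates(mul_converted.getResult(), user)) {
    // The replacement value is defined after this user, so replaceOp would
    // trigger the "operand does not dominate this use" verifier error.
    return failure();
  }
}
rewriter.replaceOp(mul_op_depth4, mul_converted.getResult());
```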
Thank you