Hi,
I want to lower and bufferize the following IR:

```mlir
module attributes {torch.debug_module_name = "NN"} {
  func.func @forward(%arg0: tensor<1x8xf32>) -> tensor<1x8xf32> {
    %0 = "tosa.const"() {value = dense<[[0.87562704, 0.502409041, -0.849453688, -0.211750284, 0.853448808, 0.0886753425, 0.63562256, -0.16944395]]> : tensor<1x8xf32>} : () -> tensor<1x8xf32>
    %1 = "tosa.add"(%arg0, %0) : (tensor<1x8xf32>, tensor<1x8xf32>) -> tensor<1x8xf32>
    return %1 : tensor<1x8xf32>
  }
}
```
I have written lowering patterns for `tosa.const` and `tosa.add` that convert them to my custom dialect on top of the memref dialect. But the lowering pass fails because the function's argument and return types are still tensor types. When the function has no tensor arguments or results, my code works fine.
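If it is useful as a reference point: patterns on the individual ops alone will not update the function signature; the usual recipe is a `TypeConverter` plus the upstream signature-conversion patterns. A sketch, assuming a recent upstream MLIR (`BufferizeTypeConverter` comes from `mlir/Dialect/Bufferization/Transforms/Bufferize.h`, the `populate*` helpers from `mlir/Dialect/Func/Transforms/FuncConversions.h` and `mlir/Transforms/DialectConversion.h`; names vary slightly across versions):

```cpp
// Inside the pass's runOnOperation(), roughly:
bufferization::BufferizeTypeConverter typeConverter; // tensor<...> -> memref<...>
RewritePatternSet patterns(&getContext());
ConversionTarget target(getContext());

// Rewrites func.func signatures (entry-block arguments and result types).
populateFunctionOpInterfaceTypeConversionPattern<func::FuncOp>(patterns,
                                                               typeConverter);
// Keeps func.return and func.call consistent with the new signature.
populateReturnOpTypeConversionPattern(patterns, typeConverter);
populateCallOpTypeConversionPattern(patterns, typeConverter);

// A func.func is legal only once its signature and body use memrefs.
target.addDynamicallyLegalOp<func::FuncOp>([&](func::FuncOp op) {
  return typeConverter.isSignatureLegal(op.getFunctionType()) &&
         typeConverter.isLegal(&op.getBody());
});
target.addDynamicallyLegalOp<func::ReturnOp>(
    [&](func::ReturnOp op) { return typeConverter.isLegal(op); });

if (failed(applyPartialConversion(getOperation(), target,
                                  std::move(patterns))))
  signalPassFailure();
```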
Now, to bufferize the function signature, I have tried:

```cpp
class FuncOpLowering : public ConversionPattern {
public:
  FuncOpLowering(MLIRContext *context)
      : ConversionPattern(func::FuncOp::getOperationName(), /*benefit=*/1,
                          context) {}

  LogicalResult
  matchAndRewrite(Operation *op, ArrayRef<Value> operands,
                  ConversionPatternRewriter &rewriter) const override {
    // Just printing the operation's arguments, name, etc. for now.
    // (A conversion pattern must also rewrite or erase `op`, or the
    // driver reports it as a failed pattern.)
    return success();
  }
};
```
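One likely reason `matchAndRewrite` never runs: the conversion driver only invokes patterns on operations the `ConversionTarget` considers illegal. If `func.func` (or the whole `func` dialect) is marked legal, `FuncOpLowering` is simply skipped. A sketch of a dynamic legality rule that forces the driver to visit functions whose signatures still contain tensors (the `isTensor` helper is my own name):

```cpp
ConversionTarget target(getContext());
target.addLegalDialect<memref::MemRefDialect>();
// func.func is legal only when no tensor types remain in its signature;
// otherwise the driver will try FuncOpLowering on it.
target.addDynamicallyLegalOp<func::FuncOp>([](func::FuncOp op) {
  auto isTensor = [](Type t) { return isa<TensorType>(t); };
  FunctionType fnType = op.getFunctionType();
  return llvm::none_of(fnType.getInputs(), isTensor) &&
         llvm::none_of(fnType.getResults(), isTensor);
});
```

(`isa<TensorType>(t)` is the newer free-function form; on older MLIR it would be `t.isa<TensorType>()`.)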
It is not printing anything, so I suspect the pattern is never invoked. I want to write a func bufferization pass the same way I wrote the lowering pass to my custom dialect. Any suggestions would help. Thanks in advance.
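In case it saves writing the pass by hand: upstream `mlir-opt` already ships passes that bufferize across function boundaries, which may be worth trying first for comparison (`lowered.mlir` is a placeholder for your already-lowered module; pass names are from recent upstream MLIR, older versions used the partial-bufferization passes instead):

```
# Recent MLIR: One-Shot Bufferize can convert the func.func boundary itself.
mlir-opt lowered.mlir \
  -pass-pipeline="builtin.module(one-shot-bufferize{bufferize-function-boundaries})"

# Older MLIR: the dedicated partial-bufferization passes.
mlir-opt lowered.mlir -func-bufferize -finalizing-bufferize
```

With `bufferize-function-boundaries`, One-Shot Bufferize rewrites the `func.func` argument and result types to memrefs, which is exactly the part the custom pattern above is missing.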