Torch-mlir to TOSA

Hello,
A newbie question here. I am trying to look at the generated TOSA operator set for the provided resnet18 example, starting from the PyTorch resnet18 model. I use the LinalgOnTensorsTosaBackend, as shown in the example, to compile the MLIR model. The inputs and outputs of the generated TOSA operators are all f32 to f32; do I need to also specify zero point and scaling on the PyTorch input to generate an i32-to-i32 operator set? It wasn’t clear to me how to generate quantized TOSA operator output.
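For reference, here is roughly what I am doing to dump the TOSA ops (just a sketch; I'm assuming the `torch_mlir.compile` entry point with `output_type="tosa"` from the example scripts, before handing the module off to the backend):

```python
import torch
import torchvision.models as models
import torch_mlir

# Pretrained fp32 resnet18, lowered to the TOSA dialect so the generated
# operator set can be inspected.
resnet18 = models.resnet18(pretrained=True).eval()
example_input = torch.ones(1, 3, 224, 224)

module = torch_mlir.compile(resnet18, example_input, output_type="tosa")
print(module)  # every tosa op shows up with f32 operands and results
```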
Thanks

@sjarus @eric-k

Thanks for your question, @sridhark. No, the quantization information is only required when dealing with quantized networks, not for fp32 content.

Thanks @sjarus. Does that mean that if I would like to see i32-based I/O for each of the generated TOSA operators, I need to provide the quantization info in the input resnet18 PyTorch model? Is there an existing example I could try out?
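For context, this is the kind of quantized input I had in mind (just a sketch, assuming the pre-quantized torchvision models are an acceptable starting point; I haven't verified that this lowers through torch-mlir):

```python
import torch
import torchvision.models.quantization as qmodels

# Pre-quantized (int8) resnet18 from torchvision; quantize=True returns a
# model whose weights and activations carry scale/zero-point information.
qresnet18 = qmodels.resnet18(pretrained=True, quantize=True).eval()

scripted = torch.jit.script(qresnet18)
print(scripted.graph)  # quantized aten ops with qparams attached
```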

Handling of non-fp types in the Torch-to-TOSA path is not robust in general, and there aren’t any test cases I’m aware of with i32 inputs. The TOSA specification does support quantized 8- and 16-bit types, and there are a large number of working networks using those types in the TensorFlow Lite to TOSA path.