Issue with PyTorch to MLIR Compilation using torch-mlir: Missing Input Representation

Hello everyone,

I hope you’re doing well. I’ve encountered a problem while working with torch-mlir to emit MLIR from a PyTorch model. The torch_mlir.compile method takes the model’s example input as one of its arguments. However, the compilation produces a representation of the computation performed by the model, while the input data itself is missing from the output. Consequently, I cannot run the resulting file after wrapping it in a @main function and lowering it to the LLVM dialect.

Here is an example of what I’m getting:

#map = affine_map<(d0, d1, d2, d3) -> (d1)>
#map1 = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>
module attributes {torch.debug_module_name = "foobar"} {
  ml_program.global private mutable @global_seed(dense<0> : tensor<i64>) : tensor<i64>
  func.func @forward(%arg0: tensor<1x3x224x224xf32>) -> tensor<1x64x222x222xf32> {
    // a bunch of computation
    // ...
    // return %foo : tensor<1x64x222x222xf32>
  }
}

I am wondering why torch-mlir does not represent the input as well. It seems to capture only the computation, not the input data. This poses a problem for me, especially since I am working on a pipeline to automatically convert numerous PyTorch modules to MLIR, and I need the input representations to be included automatically in the generated MLIR code. Adding inputs manually is not an ideal solution, as I am aiming for a fully automated process.

If anyone has insights into why the input representation is missing or if there is a workaround to include it automatically during the compilation process, I would greatly appreciate your help. Thank you in advance for your assistance!
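For context, the manual wrapping step I mentioned can be sketched as plain string templating over the emitted MLIR. This is only a sketch, assuming the module keeps the @forward signature shown above; make_main_wrapper is a hypothetical helper, and the splat constant stands in for real input data:

```python
def make_main_wrapper(in_type, out_type, splat_value="0.0"):
    """Emit MLIR text for a @main that feeds a constant splat tensor
    into the @forward generated by torch-mlir.

    Pure string templating; assumes @forward takes a single tensor
    argument.  The splat constant is a placeholder for real input data.
    """
    return "\n".join([
        f"func.func @main() -> {out_type} {{",
        f"  %input = arith.constant dense<{splat_value}> : {in_type}",
        f"  %result = func.call @forward(%input) : ({in_type}) -> {out_type}",
        f"  return %result : {out_type}",
        "}",
    ])

# Types taken from the @forward signature shown above.
wrapper = make_main_wrapper("tensor<1x3x224x224xf32>",
                            "tensor<1x64x222x222xf32>")
print(wrapper)
```

In my pipeline I would append this text to each generated module before lowering, but doing this for every model and input by hand is exactly what I am trying to avoid.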

By inputs, you mean the actual data passed into the computation at runtime?

Yes @jpienaar. For example, the input I want here is input_tensor:

compiled = torch_mlir.compile(computation, input_tensor, output_type="linalg-on-tensors")