Choosing the dialect for generating MLIR

Right now Torch-MLIR generates MLIR using the following list of dialects: arith, std, scf, linalg, and tensor, as seen in the output of mlir_module.dump() from here.

I was wondering whether I can choose which dialect my PyTorch code is generated in (i.e. generate MLIR that only uses the arith dialect, MLIR that only uses the std dialect, and so on)?

The dialects in that mix (used by linalg-on-tensors backends) are pretty orthogonal, so which dialects appear in the output depends on what the program is doing (is it adding two numbers? is it adding two tensors? does it have an if statement? etc.).
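To make that concrete, here is a hedged sketch in plain PyTorch (not tied to any particular Torch-MLIR API) of three tiny programs whose lowered IR would pull in different parts of that dialect mix; the dialect attributions follow the explanation above rather than verified compiler output.

```python
import torch

class AddScalars(torch.nn.Module):
    def forward(self, a: float, b: float) -> float:
        # Plain scalar arithmetic: expected to lower to ops in the
        # arith dialect.
        return a + b

class AddTensors(torch.nn.Module):
    def forward(self, a, b):
        # Elementwise tensor arithmetic: expected to lower to linalg
        # ops operating on tensor-dialect values.
        return a + b

class MaybeAdd(torch.nn.Module):
    def forward(self, a, b, flag: bool):
        # Data-dependent control flow: expected to bring in the scf
        # dialect (e.g. scf.if) in the lowered IR.
        if flag:
            return a + b
        return a - b
```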

Which dialects do you want Torch-MLIR to output? Also, we can provide you with IR that is only in the torch dialect if that is easier.

Thanks for your answer! No, I was looking at the complex example provided here. The MLIR generated from this example uses multiple dialects (namely tensor, arith, standard, and linalg).

Yes, that would be great! I did try the various passes on the example ResNet18 code provided, but it always produces the dialect mix. I was looking for a conversion to a single dialect (such as tensor, which you mentioned) or to dialects that are more common in the MLIR community (something like arith, standard, and affine).

I believe that mix of dialects (arith, linalg, tensor) is the most common in the MLIR community right now.

If you want the input in the Torch dialect, you can remove “torch-backend-to-linalg-on-tensors-backend-pipeline” from the set of passes run during lowering.
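For example, a minimal sketch of what that could look like in the example's lowering code. The only pipeline name confirmed in this thread is torch-backend-to-linalg-on-tensors-backend-pipeline; the other pipeline name, the import path, and the PassManager usage below are assumptions that may differ across Torch-MLIR versions.

```python
# Hedged sketch: assumes the lowering is driven by a comma-separated
# pass-pipeline string run through the MLIR Python bindings' PassManager.
from torch_mlir.passmanager import PassManager

# Full lowering (torch dialect -> linalg-on-tensors dialect mix).
# The first pipeline name is an assumption; the second is the one
# discussed in this thread.
FULL_PIPELINE = (
    "torchscript-module-to-torch-backend-pipeline,"
    "torch-backend-to-linalg-on-tensors-backend-pipeline"
)

# Dropping the second stage keeps mlir_module.dump() in the torch dialect.
TORCH_ONLY_PIPELINE = "torchscript-module-to-torch-backend-pipeline"

def lower(module, pipeline=TORCH_ONLY_PIPELINE):
    # `module` is the MLIR module produced by the importer
    # (e.g. mb.module in the ResNet18 example).
    with module.context:
        PassManager.parse(pipeline).run(module)
    return module
```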

Okay, thank you!