Following the recent changes in the HLO->Linalg lowering, I'm able to almost fully bufferize a rather complex model such as ResNet without writing code of my own.

However, one operation remains unconverted: linalg.tensor_reshape (which, in our case, is only ever applied trivially, e.g. collapsing tensor<1x112x112x64xf32> into tensor<112x112x64xf32>).

Here is an example of the generated code where tensors persist:

%539 = linalg.tensor_reshape %535 : tensor<1x112x112x64xf32> into tensor<112x112x64xf32>

Is there some automated way of converting it, too?

I’m currently using a combination of tf-opt --linalg-bufferize, which does almost all the work, and iree-opt --iree-codegen-hlo-to-linalg-on-tensors --iree-linalg-on-tensors-path to remove linalg.pad_tensor. I’ve also tried tf-opt --linalg-detensorize, but it didn’t seem to do much.

This is really fragile. That option was added strictly for lit testing and is not going to survive once IREE moves to using Linalg on tensors by default (soon). You are probably looking for just the pattern that lowers linalg.pad_tensor to a fill + subtensor_insert; you can copy that over into your local pass.
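For concreteness, here is a rough sketch of what that lowering produces. This is not copied from the IREE pattern itself, just an illustration of the fill + subtensor_insert structure it emits; the exact linalg.fill and linalg.init_tensor syntax varies between MLIR versions:

```mlir
// Input: pad a 2x2 tensor by one element on each side of dim 1.
%pad = linalg.pad_tensor %in low[0, 1] high[0, 1] {
^bb0(%i: index, %j: index):
  linalg.yield %cst : f32
} : tensor<2x2xf32> to tensor<2x4xf32>

// Lowered form: fill a fresh tensor of the padded size with the
// padding value, then insert the original tensor at the low offsets.
%init = linalg.init_tensor [2, 4] : tensor<2x4xf32>
%fill = linalg.fill(%init, %cst) : tensor<2x4xf32>, f32 -> tensor<2x4xf32>
%res = subtensor_insert %in into %fill[0, 1] [2, 2] [1, 1]
    : tensor<2x2xf32> into tensor<2x4xf32>
```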

You can try to enhance the bufferize pass to handle this. One caveat to consider: linalg.tensor_reshape has copy semantics, while linalg.reshape has aliasing semantics, i.e. it returns a different view of the same buffer. If you know that %536 is dead after this use, you can just do a direct replacement of the linalg.tensor_reshape with a linalg.reshape.
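A sketch of that direct replacement, assuming the operand has already been bufferized and is not mutated afterwards (the reassociation syntax shown here is the affine-map form and may differ across MLIR versions):

```mlir
// Before: tensor op with copy semantics.
%t = linalg.tensor_reshape %arg
    [affine_map<(d0, d1, d2, d3) -> (d0, d1)>,
     affine_map<(d0, d1, d2, d3) -> (d2)>,
     affine_map<(d0, d1, d2, d3) -> (d3)>]
    : tensor<1x112x112x64xf32> into tensor<112x112x64xf32>

// After: memref op with aliasing semantics -- %m is a view of %buf,
// so this is only safe if %buf is dead (or read-only) past this point.
%m = linalg.reshape %buf
    [affine_map<(d0, d1, d2, d3) -> (d0, d1)>,
     affine_map<(d0, d1, d2, d3) -> (d2)>,
     affine_map<(d0, d1, d2, d3) -> (d3)>]
    : memref<1x112x112x64xf32> into memref<112x112x64xf32>
```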

W.r.t. an automated way of doing this, @nicolasvasilache has been looking at bufferization and is looking at upstreaming some of it, but it's really WIP. Sorry, I don't have a better answer here.

Sorry, but in the end our solution was to write our own pass, which replaces the remaining linalg.tensor_reshape operations with a sequence of operations including memref.reshape. We didn’t find a way to do this using the existing tools.
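In case it helps others, roughly the shape of IR our pass emits. This is a hand-written sketch, not the pass output verbatim: memref.reshape takes its target shape as a buffer operand, so we materialize the static shape first (all SSA names below are made up):

```mlir
// Target shape [112, 112, 64] stored in a small index buffer.
%c0 = constant 0 : index
%c1 = constant 1 : index
%c2 = constant 2 : index
%c112 = constant 112 : index
%c64 = constant 64 : index
%shape = memref.alloc() : memref<3xindex>
memref.store %c112, %shape[%c0] : memref<3xindex>
memref.store %c112, %shape[%c1] : memref<3xindex>
memref.store %c64, %shape[%c2] : memref<3xindex>

// Reshape the bufferized operand to the collapsed shape.
%reshaped = memref.reshape %src(%shape)
    : (memref<1x112x112x64xf32>, memref<3xindex>) -> memref<112x112x64xf32>
```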