[RFC] Changing interface of tensor/memref.reshape op

The current interface of the reshape op takes the new shape as a tensor/memref argument. This makes the code harder to analyze and optimize (e.g. we had to allocate a temporary memref just for the shape and hope it would be optimized away later). The only use case for such an interface I can think of is supporting unranked types as output, but unranked types themselves are quite obscure and I don't think many users need them.
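For reference, this is roughly what the current interface forces you to emit (a sketch based on the upstream `memref.reshape` documentation; exact syntax may differ between MLIR versions):

```mlir
// The new shape must first be materialized into a 1-D shape memref,
// even when it is fully static.
%shape = memref.alloc() : memref<2xindex>
%c10 = arith.constant 10 : index
%c2  = arith.constant 2 : index
%i0  = arith.constant 0 : index
%i1  = arith.constant 1 : index
memref.store %c10, %shape[%i0] : memref<2xindex>
memref.store %c2,  %shape[%i1] : memref<2xindex>
// Only now can the reshape itself be issued.
%dst = memref.reshape %src(%shape)
    : (memref<20xf32>, memref<2xindex>) -> memref<10x2xf32>
```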

So, I want to change reshape to take the new shape as just a list of index values (and drop support for unranked output). We can introduce a separate op specifically for unranked reshape if someone really needs it.
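A hypothetical variadic-index form along those lines might look like this (this is not the current upstream op, just a sketch of the proposal):

```mlir
// Hypothetical: shape passed as plain index operands,
// no temporary shape memref needed.
%c10 = arith.constant 10 : index
%c2  = arith.constant 2 : index
%dst = memref.reshape %src(%c10, %c2)
    : (memref<20xf32>, index, index) -> memref<10x2xf32>
```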

I think the consensus here is to use expand/collapse shape (see [RFC] Add explicit shape inputs to tensor.expand_shape).
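For context, the expand/collapse ops encode the source-to-result dimension mapping via reassociation indices, e.g. (syntax per the upstream tensor dialect docs; recent versions of `tensor.expand_shape` additionally take an explicit `output_shape`, per the linked RFC):

```mlir
// Collapse 2-D to 1-D: source dims 0 and 1 merge into result dim 0.
%flat = tensor.collapse_shape %t [[0, 1]]
    : tensor<10x2xf32> into tensor<20xf32>

// Expand 1-D back to 2-D: result dims 0 and 1 come from source dim 0.
%exp = tensor.expand_shape %flat [[0, 1]] output_shape [10, 2]
    : tensor<20xf32> into tensor<10x2xf32>
```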

Also, @pifon2a introduced tensor.reshape IIRC, and recently mentioned we could/should retire it, if I'm not mistaken?

How can I handle the case when src and dst have the same number of dimensions? In general, I want to generate something simple from the frontend (from something like numpy.reshape) and don't want to bother with these reassociation dims. Lowering this reshape to something lower-level like expand/collapse shape or reinterpret_cast should be a separate next step, IMO.
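For instance, a same-rank numpy-style reshape such as 2x6 -> 3x4 cannot be expressed as a single expand or collapse, since no grouping of source dims maps onto the result dims; one possible lowering (a sketch, assuming the total element counts match) goes through a 1-D intermediate:

```mlir
// Same-rank reshape 2x6 -> 3x4: flatten first, then re-expand.
%flat = tensor.collapse_shape %src [[0, 1]]
    : tensor<2x6xf32> into tensor<12xf32>
%dst = tensor.expand_shape %flat [[0, 1]] output_shape [3, 4]
    : tensor<12xf32> into tensor<3x4xf32>
```

This is exactly the kind of boilerplate I would rather not emit from the frontend, hence the wish to keep a flat-shape reshape op and defer this lowering.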