Further additions to the memref dialect could be:
std.tensor_to_memref → memref.bufferize_cast
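As a sketch, the rename might look like this in IR (illustrative syntax; the pre-split op lives in std and is typically printed without a dialect prefix, and the `memref.bufferize_cast` name is the one proposed above, not a landed op):

```mlir
// Before the split: the std op materializes a buffer from a tensor.
%m = tensor_to_memref %t : memref<4xf32>

// After the split, under the proposed name:
%m = memref.bufferize_cast %t : memref<4xf32>
```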
Notable non-inclusions (at least initially)
Already mentioned in [RFC] split the `tensor` dialect from `std`
std.dim → split to memref/tensor.dim
std.rank → split to memref/tensor.rank
std.tensor_load and std.tensor_store need a bridging dialect
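For concreteness, a hedged sketch of what the dim/rank split could look like in IR (syntax is approximate; today's std op prints without a dialect prefix and accepts both types):

```mlir
%c0 = constant 0 : index

// Before the split: one std op serves both tensors and memrefs.
%dt = dim %t, %c0 : tensor<?x4xf32>
%dm = dim %m, %c0 : memref<?x4xf32>

// After the split: each dialect owns its own dim op.
%dt2 = tensor.dim %t, %c0 : tensor<?x4xf32>
%dm2 = memref.dim %m, %c0 : memref<?x4xf32>
```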
The split will be done in two steps:
During the initial split we create a memref dialect and rename/migrate the corresponding ops from std.
After the initial split, we expect to get back to the ops listed in “Notable non-inclusions”.
I would suggest formulating the scope of the dialect without mentioning bufferization; there are flows that start at the memref level, and it doesn't make sense to require them to reason about bufferization when they need new ops in this dialect.
I don’t see a tensor.dim for now. It seems to me that there are three alternative solutions:
1. Add a tensor.dim. Is that on the roadmap?
2. Use shape.GetShape + shape.GetExtent in place of tensor.dim.
3. Just use memref.dim (it’s counterintuitive, but this op can still accept a tensor for now…).
I think there is a desire for #1, but it is a moderately invasive change, which is why we are sticking with #3 for now. I’d be -1 on #2: it should be possible to express these forms without involving the shape dialect (and shape ops will often lower to a dim anyway).
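To make the interim option #3 concrete, the IR would look something like this (illustrative syntax; the point is that the dim op, now living in the memref dialect, still accepts a tensor operand during the transition):

```mlir
// Interim state under #3: memref.dim applied to a tensor,
// even though the op nominally belongs to the memref dialect.
%c0 = constant 0 : index
%d = memref.dim %t, %c0 : tensor<?xf32>
```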