Thanks for the quick reply.
Yes, this is on purpose.
to_memref ops are tricky because we cannot analyze tensor SSA use-def chains through them. Therefore, input IR that contains these ops currently does not bufferize.
That’s what I suspected. Thanks for clarifying!
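For reference, the problematic pattern in our input IR looks roughly like this (simplified; @ext_fn is a placeholder for one of our external functions):

    %m = bufferization.to_memref %t : memref<16xf32>
    func.call @ext_fn(%m) : (memref<16xf32>) -> ()
    %t2 = bufferization.to_tensor %m : memref<16xf32>

Between %t and %t2 the use-def chain passes through a memref, so the analysis cannot see whether @ext_fn reads or writes the underlying buffer.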
If possible, you could run One-Shot Bufferize first, then any other bufferization that you need afterwards.
This is probably the only way forward for now: instead of calling external functions, preserve tensor semantics using some proxy operation, which then gets lowered to a function call with a memref upon bufferization.
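Concretely, I picture something like this, where mydialect.ext_call is a hypothetical proxy op and @ext_fn again stands in for the external function:

    // Before bufferization: the proxy op keeps tensor semantics, so the
    // analysis can follow the SSA use-def chain through it.
    %r = mydialect.ext_call %t : (tensor<16xf32>) -> tensor<16xf32>

    // After bufferization: the proxy lowers to the actual external call,
    // taking the input and result buffers as memrefs.
    func.call @ext_fn(%t_buf, %r_buf) : (memref<16xf32>, memref<16xf32>) -> ()

Presumably the proxy op would implement BufferizableOpInterface so that One-Shot Bufferize knows which operands it reads and writes.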
You can also specify a filter in BufferizationOptions to exclude certain ops from One-Shot Bufferize.
I fear this won’t be applicable: in our use cases, the resulting buffers may be used by operations that are themselves subject to One-Shot Bufferize afterwards.
Alternatively, you could run One-Shot Bufferize without an analysis. However, that would mean that every op that writes to a buffer first makes a copy of that buffer (alloc+copy), which is probably not what you want.
One goal of adding the one-shot bufferizer to the pipeline is to reduce copies. Not using the analysis would probably end up worse than, or at best on par with, our current naive bufferization.
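If I understand the copy-before-write behavior correctly, even a simple update like the following would always get an alloc+copy (a hand-written sketch, not actual pass output):

    // Input: a tensor update that could often happen in place.
    %0 = tensor.insert %f into %t[%i] : tensor<16xf32>

    // Without the analysis, the writing op first copies its operand:
    %alloc = memref.alloc() : memref<16xf32>
    memref.copy %t_buf, %alloc : memref<16xf32> to memref<16xf32>
    memref.store %f, %alloc[%i] : memref<16xf32>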
We could also try to extend the analysis of One-Shot Bufferize to support to_tensor/to_memref, but it would likely have to be quite conservative and insert copies in many places. We have not had use cases for this until now, so I have not looked into it much further.
This would be a quick win from our perspective, but I understand that this might be far from trivial for the general case.
Based on your IR example, it looks like you are bufferizing function boundaries with a different bufferization. Are you using --func-bufferize by any chance?
No, the IR contains calls to external functions which operate on memrefs (or rather bare pointers extracted from the memrefs). These functions are implemented in Rust and are compiled completely separately, and therefore remain opaque to the IR.
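So in the IR they appear only as declarations, roughly like this (@ext_fn is again a placeholder name):

    // The body lives in a separately compiled Rust library; the IR only
    // ever sees the declaration.
    func.func private @ext_fn(memref<16xf32>)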