What is the strategy for tensor->memref conversion? (bufferization)

@dpotop note that @_sean_silva did a bunch of refactorings and improvements to make this more progressive and composable.

See the invocation of test-tensor-matmul.mlir in https://reviews.llvm.org/D90953 for usage.
Basically it’s a bunch of conversion patterns that need to be applied.

I’m not sure I understand. Do you mean that the patch you mentioned (which does not include the file BufferPlacement.cpp) supersedes the previous work that does include BufferPlacement.cpp? That would be nice, because I could not find a patch for the previous work. BTW: can this patch handle function signature and return op conversion?

Also, if this is true: how can I install the patch you mention? I just have the up-to-date llvm-project repository. How can I automate the patch application process, especially if the patch has dependencies? Is there a page explaining it?

You asked about the conversion patterns: they live in each conversion pass (xxx-bufferize; e.g., func-bufferize is implemented in createFuncBufferizePass). They do not live in BufferPlacement.cpp as of today.

BufferPlacement became BufferOptimization a few weeks back; see the commit history.

Depending on how you want to “use the features used in the presentation”, you can:

  1. create a new pass, populate the conversion patterns you need.
  2. call the passes in order as is done in https://reviews.llvm.org/D90953
  3. something else
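Option 2, for example, boils down to chaining the individual xxx-bufferize passes in a single mlir-opt invocation. A sketch only: which passes you need depends on which dialects appear in your IR, and the exact pass list used in D90953 may differ from this one:

```shell
# Hypothetical pipeline: bufferize linalg ops, then std ops,
# then function signatures/returns. Adjust to your IR's dialects.
mlir-opt input.mlir \
  -linalg-bufferize \
  -std-bufferize \
  -func-bufferize
```

The ordering matters less than coverage: each pass only converts the ops it knows about, and the partial results compose.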

BufferOptimization seems relatively independent from the patterns. It seems the “buffer finalization” is what performs the full conversion (see TestFinalizingBufferize.cpp at commit f7bc56826616814a656866fd50e90a35a8e461eb in llvm/llvm-project on GitHub). FuncBufferize seems to be an implementation of that.

@_sean_silva mentioned on Friday that he was going to do a presentation of the refactored pieces. In the meantime, the documentation of each xxx-bufferize pass seems relevant. You can just run `mlir-opt --help | grep -A10 bufferize` to see what exists and under what name.


Thanks a lot for the previous reply. I have a final, very practical question. Assume I have the following function:

```mlir
func @myfun1(%i: tensor<10xf32>) -> (tensor<10xf32>) {
  %o = absf %i : tensor<10xf32>
  return %o : tensor<10xf32>
}
```

How can I bufferize it? `mlir-opt --std-bufferize --func-bufferize` won’t handle it.
I even thought of wrapping absf in an explicit map operation (instead of the implicit one in the semantics of absf), but I cannot find a suitable map operation that can be automatically bufferized.

As Nicolas mentioned, I’ve been doing some significant refactoring here. If you wait a week, this will all be much easier. I will present at ODM soon (I just signed up for the Nov 19 ODM: “Type conversions the not-so-hard way: MLIR’s new composable Bufferize passes”).

The thing that isn’t bufferized there is the absf op, because there is no buffer equivalent of it. I’m waiting on review of some patches that make it work. The short answer is that you need https://reviews.llvm.org/D90731 and https://reviews.llvm.org/D90354, and then run `-convert-elementwise-to-linalg -linalg-bufferize -func-bufferize`. See mlir/integration_test/Dialect/Linalg/CPU/test-elementwise.mlir in https://reviews.llvm.org/D90354.
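To illustrate the first step: -convert-elementwise-to-linalg rewrites an elementwise op on tensors into a linalg.generic whose body applies the op per scalar element, and linalg.generic is something the linalg bufferization patterns do know how to lower to memrefs. This is a hand-written sketch, not actual pass output; the precise linalg.generic syntax and attribute spelling the pass emits may differ:

```mlir
// Roughly what absf on tensor<10xf32> becomes after
// -convert-elementwise-to-linalg (sketch, not verified output):
#map = affine_map<(d0) -> (d0)>
func @myfun1(%i: tensor<10xf32>) -> tensor<10xf32> {
  %o = linalg.generic
         {indexing_maps = [#map, #map], iterator_types = ["parallel"]}
         ins(%i : tensor<10xf32>) {
       ^bb0(%a: f32):          // scalar body: absf applied per element
         %r = absf %a : f32
         linalg.yield %r : f32
       } -> tensor<10xf32>
  return %o : tensor<10xf32>
}
```

From there, -linalg-bufferize converts the linalg.generic to operate on memrefs, and -func-bufferize converts the function signature and return op.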


Excellent! Thank you!