This Thursday (9am California Time, 17:00 UTC), @matthias-springer will talk about bufferization: the process of materializing tensor values into memory.
MLIR currently has two solutions for bufferization:
- “Core” bufferization is implemented as multiple passes, each of which bufferizes part of the input IR (partial bufferization). It conservatively inserts buffer allocations/copies on every memory write and relies on subsequent memref-based analyses/passes to remove unneeded allocations/copies. See the original presentation on this approach from 2020-09-24 (slides - recording).
- Linalg Comprehensive Bufferize is a new bufferization that bufferizes entire functions in “one shot” (a single pass). It analyzes use-def chains of tensor values/ops (as opposed to memrefs) to determine whether buffer allocations/copies are necessary, before modifying the IR. It often produces fewer buffer copies than core bufferization, especially when the input IR contains matching `tensor.extract_slice`/`tensor.insert_slice` pairs, as is often the case after tiling.
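To make the idea concrete, here is a minimal, schematic sketch of what bufferization does (the exact op names and syntax are illustrative, not taken from the talk):

```mlir
// Before bufferization: ops operate on immutable tensor values.
func @example(%t: tensor<4xf32>, %v: f32) -> tensor<4xf32> {
  %c0 = arith.constant 0 : index
  %0 = tensor.insert %v into %t[%c0] : tensor<4xf32>
  return %0 : tensor<4xf32>
}

// After bufferization: tensors become memrefs (buffers) and writes mutate
// memory. A conservative (partial) bufferization would copy %m before the
// store; one-shot bufferization can elide that copy when its analysis
// proves the original value is not read afterwards.
func @example(%m: memref<4xf32>, %v: f32) -> memref<4xf32> {
  %c0 = arith.constant 0 : index
  memref.store %v, %m[%c0] : memref<4xf32>
  return %m : memref<4xf32>
}
```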
In this talk, @matthias-springer will give an overview of the new one-shot bufferization, how to use it, and how it can be extended to support new ops. Finally, he will also discuss plans for unifying both bufferization solutions.
As usual, here is the information to join the meeting:
https://meet.google.com/aue-vgas-egu
+1 218-301-8485 PIN: 255 745#
I’ll also update this thread with slides and recording after the meeting.