See the previous published edition
Welcome to the forty-fifth issue of MLIR (bi)Weekly, a newsletter covering developments in MLIR and related projects in the ecosystem. MLIR (bi)Weekly is brought to you by a collective effort of contributors; we welcome your contributions!
MLIR Core
Infrastructure
- Changes in MemRefType layout representation: the layout is now stored as a single attribute implementing a dedicated interface (MemRefLayoutAttrInterface).
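The layout attribute essentially encodes how logical indices map into linear memory, for example as an offset plus per-dimension strides. As a rough, non-MLIR illustration of what a strided layout computes (plain Python; function and variable names are ours):

```python
def linearize(indices, strides, offset=0):
    """Map a logical multi-dimensional index to a position in linear
    memory using a strided layout: offset + sum(i_k * stride_k).
    This is the kind of mapping a memref layout attribute encodes."""
    assert len(indices) == len(strides)
    return offset + sum(i * s for i, s in zip(indices, strides))

# A row-major 4x5 buffer has strides [5, 1] and offset 0:
print(linearize((2, 3), (5, 1)))  # element (2, 3) -> prints 13
```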
Codegen
- Improving Linalg Comprehensive Bufferize: Working towards an op interface (to make bufferization extensible) and decoupling the pass from Linalg dependencies.
- Sparse compiler progress:
- Sparse tensor support will be presented at the 2021 LLVM Dev Meeting
- Sparse tensor output support is in full development
- The convert op now also supports the sparse-to-dense case (Wren)
- Several conversion tests were added, including mixed bit widths for the overhead storage
- Posted a large number of sparse compiler starter projects on Bugzilla
- Removed the special case for contraction in Linalg vectorization: contractions now go through generic op vectorization, with added optimization patterns to generate the same code quality.
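To make the sparse-to-dense direction of the convert op concrete, here is a small sketch (plain Python, our own names; CSR is used as one representative sparse encoding, not necessarily the one the compiler picks) of what such a conversion computes:

```python
def csr_to_dense(shape, pos, crd, vals):
    """Expand a CSR-encoded sparse matrix back into dense form,
    mirroring the sparse-to-dense direction of a convert op.
    `pos` gives per-row ranges into `crd` (column coordinates)
    and `vals` (the stored nonzero values)."""
    rows, cols = shape
    dense = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for k in range(pos[i], pos[i + 1]):
            dense[i][crd[k]] = vals[k]
    return dense

# 3x4 matrix with nonzeros (0,1)=5, (2,0)=7, (2,3)=9:
print(csr_to_dense((3, 4), [0, 1, 1, 3], [1, 0, 3], [5, 7, 9]))
# prints [[0, 5, 0, 0], [0, 0, 0, 0], [7, 0, 0, 9]]
```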
SPIR-V
The SPIR-V utility scripts now support automatically pulling in OpenCL definitions from the spec, and a few OpenCL ops were defined.
GPU
- Generalized NVVM TensorCore intrinsic support and added basic support for TF32
- Added basic support for element-wise ops using the TensorCore format, to allow fusing TensorCore matmul ops with element-wise ops
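For context, TF32 keeps float32's 8-bit exponent but only 10 explicit mantissa bits (versus 23). A rough emulation of the precision loss in plain Python (truncation is used for simplicity; actual hardware may round to nearest instead):

```python
import struct

def to_tf32(x):
    """Reduce a float32 value to TF32 precision by zeroing the low
    13 mantissa bits, leaving 10 explicit mantissa bits. The 8-bit
    exponent range is unchanged. Truncation is a simplification of
    what hardware does."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & ~0x1FFF))[0]

print(to_tf32(1.0 + 2**-10))  # still representable -> prints 1.0009765625
print(to_tf32(1.0 + 2**-11))  # below TF32 precision -> prints 1.0
```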
In the Ecosystem
IREE: An Experimental MLIR Execution Environment
- Ongoing work to use the Linalg on tensors → vectors → bufferization path in the CPU backends. The goal of this effort is to align more closely with the IREE-LLVM-sandbox, so that learnings from the sandbox can be applied to the IREE CPU backends (especially x86)
- Part of this work is to use the upstream Linalg comprehensive bufferize pass in IREE, while making it work within IREE's memory model.
- More improvements to the default configuration used in the SPIR-V backends.
- Evaluating the way forward to also get the SPIR-V backend onto the Linalg on tensors → vectors → bufferization path. Since this requires two levels of distribution, modeling parallelism for distribution at the second level has been tricky (the first level is handled by IREE uniformly for all backends). This led to a Discourse post and an ODM discussion.
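As a toy illustration of two-level distribution (our own simplified model, not IREE's actual scheme): an iteration space is first tiled across workgroups, and the iterations inside each tile are then assigned to threads within the workgroup.

```python
def distribute(n, num_groups, group_size):
    """Assign iterations 0..n-1 in two levels: tiles of `group_size`
    iterations are dealt out cyclically to workgroups (level 1), and
    within each tile, iterations go to individual threads (level 2).
    Returns the list of iterations each (group, thread) executes."""
    work = {(g, t): [] for g in range(num_groups) for t in range(group_size)}
    for i in range(n):
        tile_id = i // group_size
        g = tile_id % num_groups   # level 1: tile -> workgroup
        t = i % group_size         # level 2: element -> thread
        work[(g, t)].append(i)
    return work

w = distribute(8, 2, 2)
print(w[(0, 0)])  # group 0, thread 0 -> prints [0, 4]
```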
TensorFlow / MLIR-HLO
Kernel generator:
- JIT mode has launched and is gradually rolling out to more kernels