See the previous published edition.
Welcome to the thirty-second issue of the MLIR (bi)Weekly, a newsletter covering developments in MLIR and related projects in the ecosystem. MLIR (bi)Weekly is brought to you by a collective effort of contributors; we welcome your contributions!
MLIR Core
Infrastructure
- The forward dataflow propagation in SCCP has been refactored into a more general dataflow analysis utility, greatly reducing the amount of work needed to define dataflow analyses in MLIR (a rough sketch follows the list below).
- Pass Analyses can now accept an AnalysisManager & construction parameter to grab dependent analyses (a sketch also follows below).
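To illustrate the first item, here is a rough sketch of what a client of the new dataflow utility looks like; the hook names (ForwardDataFlowAnalysis, LatticeElement, getPessimisticValueState, visitOperation) reflect the refactored-out SCCP machinery as we understand it and may differ in detail, and the lattice itself is a made-up toy:

```cpp
#include "mlir/Analysis/DataFlowAnalysis.h"
#include "mlir/IR/OpDefinition.h"

using namespace mlir;

// Toy lattice value: does an SSA value come from a constant-like op?
// The static hooks below are the ones the utility is assumed to require.
struct ConstantKnowledge {
  static ConstantKnowledge getPessimisticValueState(MLIRContext *ctx) {
    return ConstantKnowledge{/*isConstant=*/false};
  }
  static ConstantKnowledge getPessimisticValueState(Value value) {
    return ConstantKnowledge{/*isConstant=*/false};
  }
  static ConstantKnowledge join(const ConstantKnowledge &lhs,
                                const ConstantKnowledge &rhs) {
    return ConstantKnowledge{lhs.isConstant && rhs.isConstant};
  }
  bool operator==(const ConstantKnowledge &rhs) const {
    return isConstant == rhs.isConstant;
  }

  bool isConstant;
};

// The analysis only supplies a transfer function; the framework drives the
// fixpoint iteration over the IR, which SCCP previously did by hand.
struct ConstantAnalysis : public ForwardDataFlowAnalysis<ConstantKnowledge> {
  using ForwardDataFlowAnalysis::ForwardDataFlowAnalysis;

  ChangeResult visitOperation(
      Operation *op,
      ArrayRef<LatticeElement<ConstantKnowledge> *> operands) override {
    // Trivial transfer function: record whether the defining op is
    // constant-like for each result.
    ChangeResult result = ChangeResult::NoChange;
    for (Value res : op->getResults())
      result |= getLatticeElement(res).join(
          ConstantKnowledge{op->hasTrait<OpTrait::ConstantLike>()});
    return result;
  }
};
```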
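And a hedged sketch of the second item, the new analysis constructor form; the analysis name and body here are purely illustrative, not taken from the patch. The point is that an analysis constructed by the pass infrastructure can now receive the AnalysisManager and query the analyses it depends on:

```cpp
#include "mlir/IR/Dominance.h"
#include "mlir/Pass/AnalysisManager.h"

using namespace mlir;

// Hypothetical analysis that builds on dominance information. Previously an
// analysis could only be constructed from the operation; with this change it
// can also take the AnalysisManager and grab dependent analyses from it.
struct ReachabilitySummary {
  ReachabilitySummary(Operation *op, AnalysisManager &am) {
    // Query a dependent analysis instead of recomputing it from scratch.
    DominanceInfo &domInfo = am.getAnalysis<DominanceInfo>();
    (void)domInfo;
    (void)op;
  }
};

// Inside a pass, the analysis is requested the usual way:
//   ReachabilitySummary &summary = getAnalysis<ReachabilitySummary>();
```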
Codegen
- A number of patches have landed to implement detensoring in linalg-on-tensors. They introduce a pass that converts linalg-on-tensors ops into their equivalent primitive ops, along with cost models for detensoring; one of the cost models aims to detect pure control-flow constructs so that they can be detensored.
- An update on Linalg-on-Tensors and a bufferization strategy was shared in this RFC.
- Linalg indexed_generic unification is ongoing (RFC).
- Vector.transfer lowering refactoring is ongoing to make it more progressive and composable.
- Affine.parallel now supports min/max bounds and thus imperfectly dividing tile sizes.
- Affine parallelization can now handle reductions.
Sparse Codegen
- We have a new SparseTensorDialect which will be the “home” for anything related to the sparse compiler (operations, rewriting, attributes, passes, etc.).
- We have a new attribute interface for tensor encoding (verification) and a concrete sparse tensor encoding attribute; this completes making sparse tensor types first-class citizens in MLIR!
- What to expect next: remove all the linalg glue and clutter and replace it with proper sparse tensor types; migrate everything sparse-related into the new “home”.
SPIR-V
- Boolean std.xor to SPIR-V conversion and vector<1xT> vector.extract to SPIR-V conversion were added.
Other
- Support for updating operands and deleting operations was added to the Python API.
In the Ecosystem
IREE: An Experimental MLIR Execution Environment
- IREE switched over to using Linalg-on-tensors based code generation (commit).
- Some initial regressions were caused by the code generated by the new path not being amenable to auto-vectorization in LLVM.
- Lowering element-wise operations to the vector dialect instead recovers most of the performance (commit).
- Exploring the use of an im2col conversion to improve the performance of convolution operations; initial numbers indicate more than a 2x speedup on ResNet (PR). A generic sketch of the im2col idea follows below.
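For context on that last item, im2col is a standard trick that turns a convolution into a matrix multiplication by materializing each sliding window of the input as a row of an intermediate matrix. A minimal, generic C++ sketch of the idea (not IREE's implementation) for a single-channel 2-D input with unit stride and no padding:

```cpp
#include <vector>

// Build the im2col matrix for a single-channel H x W input and a kH x kW
// kernel. Each output row holds one sliding window flattened to kH*kW values,
// so the convolution becomes a matrix multiply of this matrix with the
// flattened kernel.
std::vector<std::vector<float>>
im2col(const std::vector<std::vector<float>> &input, int kH, int kW) {
  int H = input.size(), W = input[0].size();
  int outH = H - kH + 1, outW = W - kW + 1;
  std::vector<std::vector<float>> cols;
  cols.reserve(outH * outW);
  for (int i = 0; i < outH; ++i) {
    for (int j = 0; j < outW; ++j) {
      std::vector<float> window;
      window.reserve(kH * kW);
      for (int di = 0; di < kH; ++di)
        for (int dj = 0; dj < kW; ++dj)
          window.push_back(input[i + di][j + dj]);
      cols.push_back(std::move(window));
    }
  }
  return cols;
}

// conv(input, kernel) then reduces to matmul(im2col(input, kH, kW),
// flatten(kernel)), letting the backend reuse a highly tuned matmul instead
// of a bespoke convolution loop nest.
```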
mlir-npcomp: Prototype for compiling numpy programs
- Basic end-to-end testing framework PR
- End-to-end execution of basic multi-layer perceptrons (matmul + tanh) PR
- First user of new upstream dataflow analysis framework! PR, commit (thanks River!)
- Significant progress on lowering ResNet PR
TensorFlow / MLIR-HLO
Kernel Generator
- Ongoing work to expand the type coverage of generated operations: extending complex-number support in MLIR and expanding mhlo lowerings for unsigned integers.
TFRT: A New TensorFlow Runtime
Auto-Fusion / JIT
- Work is underway to support more HLO operations, with initial support for pack (concat) and StridedSlice.
- Resolving issues around auto-vectorizing operations.
Recent Talks
- The Golden Age of Compiler Design in an Era of HW/SW Co-design - ASPLOS 2021 Keynote by Dr. Chris Lattner
- recording: https://www.youtube.com/watch?v=4HgShra-KnY
- slides: https://docs.google.com/presentation/d/1ZMtzT6nmfvNOlIaHRzdaXpFeaAklcT7DvfGjhgpzcxk/
- Open Meeting on 04-22: EmitC: Generating C/C++ from MLIR; slides - recording
- Open Meeting on 04-29: Tensor Processing Primitives; slides - recording