MLIR News, 32nd edition (4/17 - 4/30/2021)

See the previous published edition.
Welcome to the thirty-second issue of the MLIR (bi)Weekly, a newsletter covering developments in MLIR and related projects in the ecosystem. MLIR (bi)Weekly is brought to you by a collective effort of contributors; we welcome your contributions!

MLIR Core

Infrastructure

Codegen

  • A number of patches have landed to implement detensoring in linalg-on-tensors. They introduce a pass that converts linalg-on-tensor ops to their equivalent primitive ops, along with cost models for detensoring; one of the cost models aims to detect pure control-flow constructs so that they can be detensored (a sketch of the idea follows this list).
  • An update on Linalg-on-Tensors and a bufferization strategy was shared in this RFC.
  • Linalg indexed_generic unification is ongoing (RFC).
  • Refactoring of the vector.transfer lowering is ongoing to make it more progressive and composable.
  • Affine.parallel now supports min/max bounds and thus imperfectly dividing tile sizes.
  • Affine parallelization can now handle reductions.
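
As a rough illustration of what detensoring buys, the NumPy sketch below contrasts a pure control-flow value carried as a 0-d tensor with the same value kept as a primitive scalar. This is only an analogy to the idea, not the MLIR pass itself (which rewrites linalg-on-tensors IR); the loop bound and variable names are purely illustrative.

```python
import numpy as np

# "Tensorized" form: a pure control-flow value (a loop counter) carried as a
# 0-d array, so every comparison and update goes through a tensor op.
i = np.array(0)
while bool(np.less(i, np.array(10))):
    i = np.add(i, np.array(1))

# Detensored form: the same value kept as a primitive scalar and updated with
# primitive scalar ops, which is what the detensoring pass aims to recover.
j = 0
while j < 10:
    j = j + 1

assert int(i) == j == 10
```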
Sparse Codegen

SPIR-V

  • Conversions to SPIR-V were added for boolean std.xor and for vector.extract on vector<1xT>.

Other

In the Ecosystem

IREE: An Experimental MLIR Execution Environment

  • IREE switched over to using Linalg-on-tensors-based code generation (commit).
    • Some initial regressions were caused by the code generated by the new path not being amenable to auto-vectorization in LLVM.
    • Lowering element-wise operations to the vector dialect instead recovers most of the performance (commit).
  • Exploring the use of an im2col conversion to improve the performance of convolution operations; initial numbers indicate a speedup of more than 2x on ResNet (PR). A sketch of the im2col idea follows this list.
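
For context on the im2col approach mentioned above, here is a minimal NumPy sketch (not IREE's implementation): each filter-sized patch of the input is flattened into a row of a matrix, so the convolution becomes a single matmul that downstream matmul code generation can handle. The shapes, names, and the stride-1/no-padding simplification are assumptions made for illustration.

```python
import numpy as np

def im2col_conv2d(x, w):
    """2-D convolution (no padding, stride 1) rewritten as a matmul via im2col.

    x: input of shape (H, W); w: filter of shape (KH, KW).
    Returns an output of shape (H - KH + 1, W - KW + 1).
    """
    H, W = x.shape
    KH, KW = w.shape
    OH, OW = H - KH + 1, W - KW + 1
    # Gather every KHxKW patch of the input into a row of a (OH*OW, KH*KW) matrix.
    cols = np.empty((OH * OW, KH * KW))
    for i in range(OH):
        for j in range(OW):
            cols[i * OW + j] = x[i:i + KH, j:j + KW].ravel()
    # The convolution is now a single matrix-vector product.
    return (cols @ w.ravel()).reshape(OH, OW)

# Sanity check against a direct sliding-window computation.
x = np.arange(16.0).reshape(4, 4)
w = np.ones((2, 2))
expected = np.array([[x[i:i + 2, j:j + 2].sum() for j in range(3)] for i in range(3)])
assert np.allclose(im2col_conv2d(x, w), expected)
```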

mlir-npcomp: Prototype for compiling numpy programs

  • Basic end-to-end testing framework PR
  • End-to-end execution of basic multi-layer perceptrons (matmul + tanh) PR
  • First user of the new upstream dataflow analysis framework! PR, commit (thanks River!)
  • Significant progress on lowering ResNet PR

TensorFlow / MLIR-HLO

Kernel Generator

  • Ongoing work to expand the type coverage of generated operations: extending complex-number support in MLIR and expanding the mhlo lowering for unsigned integers.

TFRT: A New TensorFlow Runtime

Auto-Fusion / JIT

  • Work is underway to support more HLO operations, with initial support for pack (concat) and StridedSlice.
  • Resolving issues around auto-vectorizing operations.

Recent Talks
