MLIR News, 22nd edition (12/12/2020)

See the previous published edition.

Welcome to the twenty-second issue of the MLIR (bi)Weekly, a newsletter (published on Fridays) covering developments in MLIR and related projects in the ecosystem. MLIR (bi)Weekly is brought to you by a collective effort of contributors; we welcome your contributions!

Optimizations and Code Generation

  • Vector dialect improvements for architecture-specific features:
    • Fixed composition and code-duplication issues by making AVX512 lowering a “subpass” of the architecture-neutral vector dialect lowering
    • More architecture-specific vector dialects are being added using the same approach: ArmNeon and ArmSVE
    • This allows mixing arbitrary architecture-specific dialects with the architecture-neutral vector dialect, for your AVX512-enabled ArmNeon-optimized processor :slight_smile:
  • The ArmNeon dialect has landed. Discussion is ongoing about automating op creation where possible.
  • Linalg on tensors: tile-and-fuse on tensors is in progress; IREE and XLA are starting to experiment with the approach.
  • Sparse compiler progress
    • Added a reduction “scalarization” feature, which avoids load/add/store cycles through buffers in the innermost chain of for-loops
    • Made minor improvements (marking tensor indices as sparse/dense/undef; pre-computing simplifications rather than redoing them repeatedly during codegen)
    • This prepares for the next planned feature: vectorization
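To illustrate the idea behind reduction “scalarization” (a plain Python sketch of the transformation, not the code the sparse compiler actually emits): the innermost loop accumulates into a scalar and writes the output buffer once per row, instead of going through a load/add/store on every iteration.

```python
def row_sums_naive(a):
    """Before: load/add/store through the output buffer each step."""
    out = [0.0] * len(a)
    for i in range(len(a)):
        for j in range(len(a[i])):
            out[i] = out[i] + a[i][j]  # buffer load, add, store per element
    return out

def row_sums_scalarized(a):
    """After: accumulate in a scalar; a single store per outer iteration."""
    out = [0.0] * len(a)
    for i in range(len(a)):
        acc = 0.0                      # scalar accumulator lives in a register
        for j in range(len(a[i])):
            acc += a[i][j]             # no buffer traffic in the inner loop
        out[i] = acc                   # one store per row
    return out
```

Keeping the running value in a scalar is what later makes vectorizing the inner loop straightforward, since the reduction no longer aliases the output buffer.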


In the Ecosystem

CIRCT: Circuit IR Compilers and Tools, aka ‘MLIR for hardware’

  • Handshake dialect updates to the merging and branching ops are done.
    • This means these ops can all be emitted as SystemVerilog now, closing a long-standing issue.
  • Infrastructure for equivalence checking using yosys has landed.
  • CMake exports for projects that depend on CIRCT have landed.
  • A number of patches improving the modeling of SystemVerilog interfaces have landed.

TensorFlow / MLIR-HLO

Progress on XLA GPU backend:

  • GEMM and Conv have been migrated to take LMHLO.
  • Reduce is fully migrated to take LMHLO.
  • All elementwise ops are migrated to take LMHLO.

Kernel Generator Project:

  • Fusion now uses linalg on tensors.