MLIR News, 9th edition (6/12/2020)

See the previous published edition.

Welcome to the ninth issue of the MLIR (bi)Weekly, a newsletter (published on Friday) covering developments in MLIR and related projects in the ecosystem. MLIR (bi)Weekly is brought to you by a collective effort of contributors; we welcome your contributions!

MLIR Core

Infrastructure
  • Type conversion now supports cast materialization for 1-1 type conversions.
  • Type conversions are now cached for efficiency.
  • <OpTy>::OperandAdaptor is renamed to <OpTy>::Adaptor and now also supports “semantic” names for a dictionary of attributes that would be attached to an operation.
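As a rough illustration of the rename (the op name and operand accessors below are assumed for the example, not taken from the patch), a conversion pattern sketch would now be spelled:

```cpp
// Sketch only: assumes an op `AddFOp` with `lhs`/`rhs` operands.
// The generated adaptor class is now <OpTy>::Adaptor rather than
// <OpTy>::OperandAdaptor, and can also carry the op's attribute dictionary.
LogicalResult
matchAndRewrite(AddFOp op, ArrayRef<Value> operands,
                ConversionPatternRewriter &rewriter) const override {
  AddFOp::Adaptor adaptor(operands);  // was: AddFOp::OperandAdaptor
  Value lhs = adaptor.lhs();          // "semantic" named accessors
  Value rhs = adaptor.rhs();
  // ... build the replacement op from lhs/rhs ...
  return success();
}
```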

Table-driven Infrastructure

Optimizations and Code Generation

  • A proposal to start a suite of minimal “integration tests”, for now focused on coverage of the vector dialect. These tests run MLIR programs end-to-end on CPU, which verifies that the lowering to LLVM IR for CPU yields correct code, ensures future changes do not break the lowering, supplements the documentation with real, working, illustrative examples, and provides a framework for future test suites (Phabricator review).
  • Various low-level improvements for vector dialect operations: the vector.shape_cast lowering was generalized to all dimensions, and an “opt”-only failure in the create-mask integration tests (found with the new integration test framework) was investigated; the latter operation has already exposed several bugs in the LLVM backend in the past, and still does not run fully clean.
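For context, `vector.shape_cast` reshapes a vector value without changing its elements; the generalized lowering now handles casts between arbitrary ranks. An illustrative IR fragment (not taken from the patch):

```mlir
// Collapse a 2-D vector into a 1-D vector with the same element count.
%flat = vector.shape_cast %v : vector<2x4xf32> to vector<8xf32>
// The inverse expansion is also expressed as a shape cast.
%back = vector.shape_cast %flat : vector<8xf32> to vector<2x4xf32>
```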
  • @nicolasvasilache and @ThomasRaoux shared a high-level description of early work on the lowering from the vector dialect to GPU; in particular, this includes some thoughts on using “cooperative ops”, general striding and transposes, and more.
  • StandardToLLVM conversion now supports conversion to bf16.
  • Affine loop fusion now revisits fusion candidates after a successful fusion, so that they are reconsidered in the context of the new fused loop nest.
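A sketch of the behavior with hypothetical IR: fusing a producer loop into its consumer yields a new nest that is itself re-examined, so further fusion opportunities exposed by the first fusion are not missed.

```mlir
// A producer/consumer pair over buffers %A -> %B -> %C.
affine.for %i = 0 to 128 {
  %0 = affine.load %A[%i] : memref<128xf32>
  affine.store %0, %B[%i] : memref<128xf32>
}
affine.for %j = 0 to 128 {
  %1 = affine.load %B[%j] : memref<128xf32>
  affine.store %1, %C[%j] : memref<128xf32>
}
// After these two loops are fused, the resulting single nest is itself
// reconsidered as a candidate, so longer producer/consumer chains can
// collapse fully rather than stopping after one fusion step.
```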


SPIR-V
  • SPIR-V matrix types are now supported and SPIR-V struct types are enhanced to support more member decorations like matrix strides and majorness. These patches are from @hazem to support SPIR-V’s graphics use cases.
  • The SPIR-V to LLVM conversion (GSoC project by @george) has seen quite a few patches land, setting up the basic conversion scaffolding and adding conversions for many arithmetic, bitwise, comparison, and bit-shift ops.
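For example, an integer add would be converted roughly as follows (illustrative fragment; the LLVM dialect at the time spelled its types in the `!llvm.*` form):

```mlir
// SPIR-V dialect:
%sum = spv.IAdd %a, %b : i32
// converts to the LLVM dialect as:
%sum = llvm.add %a, %b : !llvm.i32
```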


  • A lot of work is going on with the shape dialect: canonicalization, folding, adding traits, and lowering to standard dialect for code generation.
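One such folding, sketched with illustrative IR (op spellings as of the time of writing): `shape.shape_of` applied to a statically shaped value can fold to a constant shape.

```mlir
// Before canonicalization:
%s = shape.shape_of %t : tensor<2x3xf32>
// After folding, the shape is a compile-time constant:
%s = shape.const_shape [2, 3]
```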

In the Ecosystem

Flang, the LLVM Fortran Compiler

Work continues on upstreaming the FIR-related work, most recently the code for handling complex expressions and for building DO loops.

IREE: An Experimental MLIR Execution Environment

  • Better support for i1 crossing SPIR-V ABI boundary
  • Work to triage getting GPU tools (NSight, Radeon GPU Profiler, RenderDoc) to function with IREE-based Vulkan compute; a flow is somewhat working, but with high friction.
  • Some extended op support (dynamically shaped RangeOp, ShapeOp).
  • The AOT LLVM HAL backend is now supported (vs. just JIT), enabling further ongoing work toward full Android support.
  • Continuing to work on lowerings and improvements needed to generate SPIR-V code that uses Nvidia tensor cores (via the cooperative matrix extension).
  • Extended the CMake build to allow taking a dependency on IREE from a project that also depends on LLVM (used for npcomp -> IREE).

mlir-npcomp: Prototype for compiling numpy programs

  • AST-based Python importer + type inference (tests/examples)
  • Initial work to hook up IREE as a backend
  • Building out a minimal standalone npcomp runtime for testing the npcomp end-to-end compilation flow