MLIR News, 60th edition (7th Jan 2024)

Happy New Year! Welcome to the 60th issue of the MLIR Newsletter, covering developments in MLIR and related projects in the ecosystem. We welcome your contributions (contact: javed.absar@gmail.com). Click here to see the previous edition.

Highlights and Ecosystem:

  • Kudos to Alex Bradbury: LLVM Weekly has now been published every single week, without fail, for the past ten years. Alex’s lovely write-up on the occasion: [Reflections on 10 Years of LLVM Weekly].

  • Andrzej (Arm MLIR team) shared the great news: “We have reached a very important milestone for targeting Arm’s Scalable Matrix Extension (SME) from IREE. Basically, by passing --iree-llvmcpu-target-cpu-features="+sve,+sme" to iree-compile, you can compile a linalg.matmul op to an SME binary. While there’s no hardware available today (hopefully that will change soon), you can use an emulator to run it - it just works! This is huge! While there’s a lot of buzz around SME, it also presents some unique challenges when it comes to code generation”. [Details].
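
    A minimal sketch of such an input is below; only the --iree-llvmcpu-target-cpu-features flag is quoted from the announcement, and the other iree-compile flags in the comment are assumptions that may vary across IREE versions.

    ```mlir
    // Hedged sketch: a full invocation might look roughly like
    //   iree-compile matmul.mlir -o matmul.vmfb \
    //     --iree-hal-target-backends=llvm-cpu \
    //     --iree-llvmcpu-target-cpu-features="+sve,+sme"
    // (the backend/output flags are assumptions, not quoted from the announcement).
    func.func @matmul(%lhs: tensor<?x?xf32>, %rhs: tensor<?x?xf32>,
                      %acc: tensor<?x?xf32>) -> tensor<?x?xf32> {
      // A plain linalg.matmul; IREE's LLVMCPU pipeline targets SME when the
      // feature is enabled.
      %0 = linalg.matmul ins(%lhs, %rhs : tensor<?x?xf32>, tensor<?x?xf32>)
                         outs(%acc : tensor<?x?xf32>) -> tensor<?x?xf32>
      return %0 : tensor<?x?xf32>
    }
    ```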

  • Lots of discussion between Chris, Renato, Stella et al. on the proposal by Hongbin Zhang (ISCAS PLCT Lab) and Diego Caballero (Google), [RFC] Dynamic Vector Semantics for the MLIR Vector Dialect: “The proposal extends the Vector dialect with the concept of dynamic vectors (i.e., vectors whose length may arbitrarily vary at runtime). It defines a dynamic vector type (e.g., vector<?xf32>) and two operations (vector.get_vl and vector.set_vl) to manipulate dynamic vectors. The main focus of our proposal is to properly define the semantics of dynamic vectors. We present three generic use cases as an example of applicability, but they shouldn’t prescribe or limit their usage. We also showcase RVV (RISC-V Vector Extensions) and its vector-length agnostic (VLA) model as a specific end-to-end application example. However, we envision further applicability of dynamic vectors and custom lowerings to other targets that we may explore in the future”.
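
    Purely as illustration: only the vector<?xf32> type and the vector.get_vl / vector.set_vl names come from the RFC text; the operand and result signatures in this sketch are hypothetical, since the exact syntax is still under discussion.

    ```mlir
    // Hypothetical signatures, not the RFC's final syntax.
    %avl = vector.get_vl %n : index           // proposed: query the active vector length
    vector.set_vl %avl : index                // proposed: set the active vector length
    %sum = arith.addf %a, %b : vector<?xf32>  // the proposed dynamic vector type in use
    ```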

  • Jianhui Li proposed adding an XeGPU dialect to support high-performance GEMM code generation on Intel GPUs. [click here].

  • Discussion continues on Fabian’s proposed ptr dialect, which models pointer and low-level memory operations and generalizes the pointer operations in the LLVM dialect, making them reusable and interoperable with higher-level dialects. [click here].

  • Discussions between Renato and Matthias on [Liveness Analysis for Bufferization deallocation].

  • Discussion on improving the handling of unit dimensions in the vector dialect.
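
    For context, a “unit dimension” is a size-1 vector dimension, such as the leading 1 in vector<1x4xf32>; rewrite patterns often try to drop these, for example via vector.shape_cast:

    ```mlir
    // vector<1x4xf32> holds the same four elements as vector<4xf32>, so a
    // shape_cast can remove the leading unit dimension.
    %dropped = vector.shape_cast %v : vector<1x4xf32> to vector<4xf32>
    ```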

  • Save the date for the 2024 EuroLLVM Developers’ Meeting! It will be held April 9-11 at the Marriott in Vienna, Austria.

MLIR Commits Past Two Weeks:

  • Uday added support for interrupting affine walks, along the lines of Operation::walk. [click here].

  • Alex added a new transform operation that creates a new parameter containing the number of payload objects (operations, values or attributes) associated with the argument. This is useful in matching and for debugging purposes. This replaces three ad-hoc operations previously provided by the test extension. [click here].
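
    A minimal sketch of how such an op might be used in a transform script; the op name and syntax here are recalled from memory and may not match the commit exactly (the linked commit is authoritative).

    ```mlir
    transform.sequence failures(propagate) {
    ^bb0(%root: !transform.any_op):
      // Collect some payload ops into a handle.
      %matmuls = transform.structured.match ops{["linalg.matmul"]} in %root
          : (!transform.any_op) -> !transform.any_op
      // New op (name assumed from the commit summary): produce a parameter
      // holding the number of payload ops associated with %matmuls.
      %count = transform.num_associations %matmuls
          : (!transform.any_op) -> !transform.param<i64>
      transform.yield
    }
    ```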

  • Han-Chung improved the tensor.pack simplification pattern: a tensor.pack op can be rewritten to a tensor.expand_shape op if the packing only happens on the innermost dimension. [click here].
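
    A sketch of the rewrite; the shapes are illustrative and the exact textual syntax of both ops may vary across MLIR versions.

    ```mlir
    // A tensor.pack that tiles only the innermost dimension (tile size 8,
    // no padding, no outer-dimension permutation) ...
    %packed = tensor.pack %src inner_dims_pos = [1] inner_tiles = [8]
        into %dest : tensor<4x16xf32> -> tensor<4x2x8xf32>

    // ... is just a reshape, so it can be simplified to:
    %expanded = tensor.expand_shape %src [[0], [1, 2]]
        : tensor<4x16xf32> into tensor<4x2x8xf32>
    ```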

  • Matthias added an additional “expensive check” that verifies the IR before starting a greedy pattern rewrite, after every pattern application, and after every folding (only if MLIR_ENABLE_EXPENSIVE_PATTERN_API_CHECKS is set). [click here].

  • Tobias improved alloca handling during inlining. This changes the alloca handling in the LLVM inliner: it ensures that alloca operations, even those nested within a region operation, can be relocated to the entry block of the function, or to the closest ancestor region that is marked with either the IsolatedFromAbove or AutomaticAllocationScope trait. [click here].

Useful Links
