MLIR News, 53rd edition (16th August 2023)

Welcome to the 53rd issue of the MLIR Newsletter covering developments in MLIR and related projects in the ecosystem. We welcome your contributions (contact: javed.absar@gmail.com). Click here to see previous editions.

Highlights

  • A proposal that MLIR should also be able to explicitly declare the lifetime of buffers within the IR itself. Since working with raw pointers is improper in MLIR, especially in higher-level dialects, the authors propose that two new ops be contributed to the MemRef dialect. See [RFC] Lifetime Annotations of Memory Within MLIR on the LLVM Discussion Forums.
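    As a rough sketch of what such annotations might look like in IR (the op names below are illustrative placeholders, not necessarily the ones proposed in the RFC):

    ```mlir
    func.func @f() {
      %buf = memref.alloca() : memref<16xf32>
      // Hypothetical op marking the start of the buffer's lifetime.
      memref.lifetime_start %buf : memref<16xf32>
      // ... uses of %buf ...
      // Hypothetical op marking the end of the buffer's lifetime;
      // accesses past this point would be undefined behavior.
      memref.lifetime_end %buf : memref<16xf32>
      return
    }
    ```

    Explicit lifetime markers like these would let later passes (e.g. buffer reuse or stack coloring) reason about liveness without relying on pointer escape analysis.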

  • Thanks, Mehdi, for creating a pass that performs dialect conversion to LLVM for all dialects that implement ConvertToLLVMPatternInterface. [Click here for the commit]. Also, Matthias added ConvertToLLVMPatternInterface for more dialects: arith, async, complex, and cf [click here for the diff].

  • RFC: More OpFoldResult and “mixed indices” in ops that deal with Shaped Values. Today, tensor.extract_slice, tensor.insert_slice, and memref.subview support a mix of SSA values and attributes for their offsets, sizes, and strides. The point of this RFC is to extend this “mixed” representation to other ops that index into ranked shaped types.
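    As a reminder of what the existing “mixed” form looks like, tensor.extract_slice already accepts any combination of static attributes and dynamic SSA values per position (a sketch; %t, %off, and %sz are illustrative names):

    ```mlir
    // Offsets mix a dynamic value (%off) with a static 0; sizes mix a
    // static 4 with a dynamic %sz; strides are fully static.
    %slice = tensor.extract_slice %t[%off, 0] [4, %sz] [1, 1]
        : tensor<?x?xf32> to tensor<4x?xf32>
    ```

    Keeping values static where possible preserves information for folding and canonicalization, which is what OpFoldResult models in the C++ API.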

  • vector.print has improved significantly. It can now print scalable vector types, does a better job printing massive vectors, and is easier to test. The patch splits the lowering of vector.print into two steps: first an n-D print is converted into a loop of scalar prints of the elements, then a second pass converts those scalar prints into runtime calls. The former is done in VectorToSCF and the latter in VectorToLLVM. [Click here to view the patch].
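    Conceptually, the two-step lowering turns an n-D print into something like the following (a loose sketch of the shape of the generated IR, not its exact form):

    ```mlir
    // Before: a single n-D print.
    vector.print %v : vector<2x2xf32>

    // After VectorToSCF (sketched): a loop nest of scalar prints.
    scf.for %i = %c0 to %c2 step %c1 {
      scf.for %j = %c0 to %c2 step %c1 {
        %e = vector.extract %v[%i, %j] : f32 from vector<2x2xf32>
        vector.print %e : f32
      }
    }
    // VectorToLLVM then lowers each scalar print to a runtime call.
    ```

    Because the loop bounds are ordinary SSA values, the same scheme extends naturally to scalable vectors, whose element counts are only known at runtime.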

MLIR Commits

MLIR RFC Discussions

  • For anyone who hasn’t seen it and works a lot with attributes, Mehdi’s talk on [properties] is really worth a watch.

  • Jeremy Kun mentioned, “some people earlier were asking for tutorials aimed at complete beginners. I wrote the first few entries in a longer planned series that will cover the basics of MLIR. There’s a table of contents here: GitHub - j2kun/mlir-tutorial and I’d appreciate any feedback people have to give (both newbies who want more clarification, and experts who feel like correcting my misunderstandings)”.

  • Folks seem to need frequent reminders of the very useful Passes documentation in MLIR.

  • To the question, “which optimizations that occur at the XLA stage may overlap with the passes in common MLIR optimization dialects …”.
    RESPONSE (Discord): Most MLIR dialects model a different level of abstraction than the HLO graph; the closest equivalent is TOSA. You’ll find similar algebraic optimizations, however XLA is really powerful when it comes to mapping a graph to multiple devices, that is, SPMD partitioning for example. There is no reason you can’t implement SPMD partitioning in MLIR (some projects actually have), but that isn’t part of the upstream project. Also keep in mind: the dialects present upstream are fully disconnected from the MLIR framework; many projects are based on MLIR and won’t use them. Modular, for example, uses very few dialects from upstream.

Useful Links
