MLIR News, 72nd Edition (26th Nov 2024)

Welcome to the 72nd issue of the MLIR Newsletter covering developments in MLIR and related projects in the ecosystem. We welcome your contributions (contact: javed.absar@gmail.com). Click here to see previous editions.

Highlights, Discussions & RFCs

MLIR Commits Recently:

  • Matthias added "support for bufferizing external functions that have no body. Such functions were previously rejected by One-Shot Bufferize if they returned a tensor value." [click here]. (A bufferization sketch follows this list.)

  • Jakub added a fast walk-based pattern rewrite driver: walkAndApplyPatterns. It does not iterate until a fixpoint and does not perform folding or DCE. It should have much lower overhead than the greedy pattern rewrite driver for simple pattern sets (see the walkAndApplyPatterns sketch after this list). [PR]

  • Lialan extended VectorEmulateNarrowType to “support loading of unaligned vectors” [click here].

  • Andrzej split GenericPadOpVectorizationPattern into two patterns – "With this change, we gain the following: 1. a clear separation between pre-processing and vectorization transformations/stages; 2. a path to support masked vectorisation for tensor.insert_slice (with a dedicated pattern for vectorization, it is much easier to specify the input vector sizes used in masking); 3. more opportunities to vectorize tensor.insert_slice." [click here].

  • This [commit] adds extra checks and assertions to ConversionPatternRewriterImpl::notifyOpReplaced to improve its robustness. This change is in preparation for merging the 1:1 and 1:N dialect conversion drivers.

  • The 1:N type converter derived from the 1:1 type converter and extended it with 1:N target materializations. This [commit] merges the two type converters and stores 1:N target materializations in the 1:1 type converter. This is in preparation for merging the 1:1 and 1:N dialect conversion infrastructures (see the materialization sketch after this list).

  • [Commit] adds a canonicalization pattern for scf.forall that replaces constant induction variables with a constant index. There is a similar canonicalization that completely removes constant induction variables from the loop, but that pattern does not apply to foralls with mappings, so this one is necessary for those cases (see the canonicalization sketch after this list).

  • Shahid introduced transpose semantics to ‘linalg.matmul’ ops [click here]. The main goal of this patch is to extend the semantics of the ‘linalg.matmul’ named op to include per-operand transpose semantics, while also laying out a way to move op definitions from OpDSL to TableGen. Hence, it is implemented in TableGen.
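
To make the function-boundary context of Matthias's change concrete, here is a minimal, hedged sketch of invoking One-Shot Bufferize across function boundaries from C++. It assumes the module-level entry point `bufferization::runOneShotModuleBufferize` and the `bufferizeFunctionBoundaries` option; the commit itself only relaxes how bodiless external functions are treated.

```cpp
#include "mlir/Dialect/Bufferization/Transforms/OneShotAnalysis.h"
#include "mlir/Dialect/Bufferization/Transforms/OneShotModuleBufferize.h"
#include "mlir/IR/BuiltinOps.h"

using namespace mlir;

// Bufferize a whole module, including function signatures, so that external
// function declarations (no body) returning tensors are handled rather than
// rejected up front.
LogicalResult bufferizeModule(ModuleOp module) {
  bufferization::OneShotBufferizationOptions options;
  options.bufferizeFunctionBoundaries = true;
  return bufferization::runOneShotModuleBufferize(module, options);
}
```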
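
For Jakub's walk-based driver, here is a minimal sketch of driving a small pattern set with it, assuming the `walkAndApplyPatterns` entry point from `mlir/Transforms/WalkPatternRewriteDriver.h` named in the PR; `SubSelfToZero` is just an illustrative stand-in pattern.

```cpp
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/IR/PatternMatch.h"
#include "mlir/Transforms/WalkPatternRewriteDriver.h"

using namespace mlir;

namespace {
// Illustrative pattern: rewrite `arith.subi %x, %x` into a zero constant.
struct SubSelfToZero : OpRewritePattern<arith::SubIOp> {
  using OpRewritePattern::OpRewritePattern;
  LogicalResult matchAndRewrite(arith::SubIOp op,
                                PatternRewriter &rewriter) const override {
    if (op.getLhs() != op.getRhs())
      return failure();
    rewriter.replaceOpWithNewOp<arith::ConstantOp>(
        op, rewriter.getZeroAttr(op.getType()));
    return success();
  }
};
} // namespace

void runSimpleRewrites(Operation *root) {
  RewritePatternSet patterns(root->getContext());
  patterns.add<SubSelfToZero>(patterns.getContext());
  // One walk over the IR: no fixpoint iteration, no folding, no DCE.
  walkAndApplyPatterns(root, std::move(patterns));
}
```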
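
For the merged type converters, this is a hedged sketch of what registering a 1:N target materialization directly on `TypeConverter` could look like. The complex-to-two-scalars decomposition and the exact callback shape (a `TypeRange` of result types, multiple returned values) are assumptions for illustration; the commit documents the real API.

```cpp
#include "mlir/Dialect/Complex/IR/Complex.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

// Decompose `complex<T>` into its two `T` elements during dialect conversion.
void configureComplexDecomposition(TypeConverter &converter) {
  // Identity fallback for all other types.
  converter.addConversion([](Type type) { return type; });
  // 1:N type conversion: complex<T> -> (T, T).
  converter.addConversion(
      [](ComplexType type, SmallVectorImpl<Type> &results) {
        results.append(2, type.getElementType());
        return success();
      });
  // 1:N target materialization, registered on the (formerly 1:1) converter.
  // Assumes exactly one complex input, matching the conversion above.
  converter.addTargetMaterialization(
      [](OpBuilder &builder, TypeRange resultTypes, ValueRange inputs,
         Location loc) -> SmallVector<Value> {
        Value re =
            builder.create<complex::ReOp>(loc, resultTypes[0], inputs[0]);
        Value im =
            builder.create<complex::ImOp>(loc, resultTypes[1], inputs[0]);
        return {re, im};
      });
}
```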
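
Finally, one plausible way to exercise the new scf.forall canonicalization programmatically is to collect the op's canonicalization patterns and run them with the greedy driver; nothing here is specific to the commit beyond the pattern being part of scf.forall's canonicalization set.

```cpp
#include "mlir/Dialect/SCF/IR/SCF.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

using namespace mlir;

// Apply scf.forall's canonicalization patterns (which now include replacing
// constant induction variables) over a region of IR.
LogicalResult canonicalizeForalls(Operation *root) {
  MLIRContext *ctx = root->getContext();
  RewritePatternSet patterns(ctx);
  scf::ForallOp::getCanonicalizationPatterns(patterns, ctx);
  return applyPatternsAndFoldGreedily(root, std::move(patterns));
}
```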

Related Projects

Useful Links
