Welcome to the 72nd issue of the MLIR Newsletter covering developments in MLIR and related projects in the ecosystem. We welcome your contributions (contact: javed.absar@gmail.com). Click here to see previous editions.
Highlights, Discussions & RFCs
- LLVM 19.1.4 Released! Note on libc++.
- The Tenth Annual Workshop on the LLVM Compiler Infrastructure in HPC was held in conjunction with SC24 on Monday, November 18, 2024, in Atlanta, Georgia, USA. Program.
- A number of bits of discussion (pun intended) on re-thinking the approach to low-precision FP types.
- Discussions on the canonicalization of linalg.generic with broadcast semantics (a sketch of such an op follows after this list). [click here].
- RFC on supporting sub-channel quantization in MLIR. [click here].
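For context on the linalg.generic discussion above, here is a minimal, hypothetical sketch (function name, maps, and shapes are illustrative, not taken from the thread) of a generic op with broadcast semantics: the input is indexed by a map that drops d1, so it is broadcast along that dimension, making the op equivalent to a linalg.broadcast written in generic form.

```mlir
#broadcast = affine_map<(d0, d1) -> (d0)>
#identity  = affine_map<(d0, d1) -> (d0, d1)>

// %vec is indexed only by d0, so it is broadcast along d1.
func.func @broadcast_like(%vec: tensor<8xf32>,
                          %init: tensor<8x16xf32>) -> tensor<8x16xf32> {
  %0 = linalg.generic
         {indexing_maps = [#broadcast, #identity],
          iterator_types = ["parallel", "parallel"]}
         ins(%vec : tensor<8xf32>) outs(%init : tensor<8x16xf32>) {
  ^bb0(%in: f32, %out: f32):
    linalg.yield %in : f32
  } -> tensor<8x16xf32>
  return %0 : tensor<8x16xf32>
}
```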
MLIR Commits Recently:
- Matthias added "support for bufferizing external functions that have no body. Such functions were previously rejected by One-Shot Bufferize if they returned a tensor value." [click here].
- Jakub added a fast walk-based pattern rewrite driver: walkAndApplyPatterns. It does not iterate until a fixpoint and does not perform folding or DCE. It should have much lower overhead than the greedy pattern rewrite driver for simple pattern sets. [PR]
- Lialan extended VectorEmulateNarrowType to “support loading of unaligned vectors”; see the sketch after this list. [click here].
- Andrzej split GenericPadOpVectorizationPattern into two patterns (see the tensor.pad sketch after this list). "With this change, we gain the following: 1. a clear separation between pre-processing and vectorization transformations/stages; 2. a path to support masked vectorisation for tensor.insert_slice (with a dedicated pattern for vectorization, it is much easier to specify the input vector sizes used in masking); 3. more opportunities to vectorize tensor.insert_slice." [click here].
- This [commit] adds extra checks/assertions to ConversionPatternRewriterImpl::notifyOpReplaced to improve its robustness. This change is in preparation for merging the 1:1 and 1:N dialect conversion drivers.
- The 1:N type converter derives from the 1:1 type converter and extends it with 1:N target materializations. This [commit] merges the two type converters and stores 1:N target materializations in the 1:1 type converter. This is in preparation for merging the 1:1 and 1:N dialect conversion infrastructures.
- [Commit] adds a canonicalization pattern for scf.forall that replaces constant induction variables with a constant index (see the scf.forall sketch after this list). There is a similar canonicalization that completely removes constant induction variables from the loop, but that pattern does not apply to foralls with mappings, so this one is necessary for those cases.
- Shahid introduced transpose semantics to ‘linalg.matmul’ ops (see the sketch after this list). [click here]. The main goal of this patch is to extend the semantics of the ‘linalg.matmul’ named op to include per-operand transpose semantics, while also laying out a way to move op definitions from OpDSL to TableGen; hence, it is implemented in TableGen.
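On the VectorEmulateNarrowType item above, the hypothetical sketch below (names and shapes are illustrative) shows the kind of sub-byte load that is “unaligned”: each i2 element is 2 bits wide, so a load starting at element offset 1 does not begin on an i8 boundary, and the emulation can now handle it (roughly, by loading whole bytes and extracting the requested bits) rather than only supporting byte-aligned starting offsets.

```mlir
// Hypothetical example: the load starts 2 bits into the first byte.
func.func @unaligned_i2_load(%m: memref<8xi2>) -> vector<3xi2> {
  %c1 = arith.constant 1 : index
  %v = vector.load %m[%c1] : memref<8xi2>, vector<3xi2>
  return %v : vector<3xi2>
}
```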
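On the GenericPadOpVectorizationPattern split above, here is a rough sketch of the pre-processing stage, assuming the usual decomposition of tensor.pad into a fill of the padded shape followed by a tensor.insert_slice of the source; function names and shapes are illustrative. The resulting tensor.insert_slice is what the dedicated vectorization pattern (and, later, its masked variant) can then target.

```mlir
// Before pre-processing: a tensor.pad of a 6x5 source up to 8x8.
func.func @before(%src: tensor<6x5xf32>, %pad_val: f32) -> tensor<8x8xf32> {
  %padded = tensor.pad %src low[0, 0] high[2, 3] {
  ^bb0(%i: index, %j: index):
    tensor.yield %pad_val : f32
  } : tensor<6x5xf32> to tensor<8x8xf32>
  return %padded : tensor<8x8xf32>
}

// After pre-processing: fill the padded shape, then insert the source.
func.func @after(%src: tensor<6x5xf32>, %pad_val: f32) -> tensor<8x8xf32> {
  %empty  = tensor.empty() : tensor<8x8xf32>
  %filled = linalg.fill ins(%pad_val : f32)
              outs(%empty : tensor<8x8xf32>) -> tensor<8x8xf32>
  %result = tensor.insert_slice %src into %filled[0, 0] [6, 5] [1, 1]
              : tensor<6x5xf32> into tensor<8x8xf32>
  return %result : tensor<8x8xf32>
}
```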
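On the scf.forall canonicalization above, a hypothetical example of the kind of loop it targets (names, shapes, and the mapping attribute are illustrative): %i has a single iteration, so every use of %i inside the body can be rewritten to arith.constant 0 : index, while the loop dimension itself is kept because the forall carries a mapping attribute, which the existing single-iteration-dim folder does not touch.

```mlir
// %i always equals 0; %j ranges over [0, 16).
func.func @forall_with_mapping(%in: memref<1x16xf32>, %out: memref<1x16xf32>) {
  scf.forall (%i, %j) in (1, 16) {
    %v = memref.load %in[%i, %j] : memref<1x16xf32>
    memref.store %v, %out[%i, %j] : memref<1x16xf32>
  } {mapping = [#gpu.block<x>, #gpu.block<y>]}
  return
}
```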
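On Shahid's ‘linalg.matmul’ change, the sketch below shows the new per-operand indexing_maps form (syntax approximated from the patch; function name and shapes are illustrative), expressing a matmul whose A operand is transposed. The default, non-transposed maps are (d0, d2), (d2, d1), and (d0, d1).

```mlir
// A is KxM (transposed), B is KxN, C is MxN.
func.func @matmul_transpose_a(%A: tensor<5x3xf32>, %B: tensor<5x7xf32>,
                              %C: tensor<3x7xf32>) -> tensor<3x7xf32> {
  %0 = linalg.matmul
         indexing_maps = [affine_map<(d0, d1, d2) -> (d2, d0)>,  // A (transposed)
                          affine_map<(d0, d1, d2) -> (d2, d1)>,  // B
                          affine_map<(d0, d1, d2) -> (d0, d1)>]  // C
         ins(%A, %B : tensor<5x3xf32>, tensor<5x7xf32>)
         outs(%C : tensor<3x7xf32>) -> tensor<3x7xf32>
  return %0 : tensor<3x7xf32>
}
```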
Related Projects
Useful Links