[RFC] Proposal for a high-level ML dialect in MLIR

I believe we need better separation (remember [RFC] Restructuring of the MLIR repo? I sent patches but never finished them to the point of landing the changes…), but I also strongly believe the following:

For example: I want us to get to the point of having an end-to-end TOSA compiler in-tree for CPU/GPU, with integration tests, etc.
This does not have to be the only way someone compiles TOSA (or anything else), but I don’t see why the fact that some people may want to do it differently, or don’t want to work upstream, should prevent an interested set of people from collaborating in-tree.
This is also my motivation for the repo restructuring: protect the core of the project while preserving the ability to collaborate on building one or multiple end-to-end stories. The modularity of MLIR should allow us to do this and to reuse pieces / dialects / components in various end-to-end schemes.

The problem of coordinating LLVM commits across different projects is a feature to me: people are encouraged to upstream their code to LLVM (and MLIR) because they otherwise pay a fairly high price keeping their code updated out-of-tree.

And the removal of OpaqueAttr from MLIR Core (the builtin dialect) is even more impactful than that: we’re still working on the fixes to adjust to it right now! That says something about MLIR Core’s “stability” as well (and there are a few more breaking changes that may still happen in Core).
