[RFC] Upstream IRDL, a dialect for dynamic dialects

While we already extended MLIR with API support for the dynamic definition of dialects, accessing this feature from within MLIR remains hard and requires additional C++ code. Hence, we propose to add proper tool support for dynamic dialects. Following the design of PDL, we define a corresponding IRDL dialect that represents IR properties, such as types, attributes, and operations, as MLIR programs. We then extend mlir-opt to instantiate dynamic dialects from IRDL dialect definitions provided at the command line. This RFC introduces the core concepts of IRDL, with the objective of building more features on top of it.

I created two patches on Phabricator: the first introduces the IRDL dialect, and the second adds the registration of IRDL using dynamic dialects. Reviews would be very welcome!

Motivating example

Let’s take the following IRDL program:

module {
  irdl.dialect @cmath {
    irdl.type @complex {
      %0 = irdl.is f32
      %1 = irdl.is f64
      %2 = irdl.any_of(%0, %1)
      irdl.parameters(%2)
    }

    irdl.operation @norm {
      %0 = irdl.any
      %1 = irdl.parametric @complex<%0>
      irdl.operands(%1)
      irdl.results(%0)
    }
  }
}
This program defines the cmath dialect, with a cmath.complex type and a cmath.norm operation. The cmath.complex type has one parameter that can be either f32 or f64, representing the scalar type backing the complex. The cmath.norm operation takes a single operand that must be a complex number, and returns a value of the exact underlying scalar type. Notice how the type matched by %0 must be the same across references: if the operand is of type cmath.complex<f32>, the result must be of type f32.

The cmath dialect can be used directly by mlir-opt with the command mlir-opt program.mlir --irdl-file=dialect.irdl.mlir. It will, for instance, parse the following program:

  func.func @conorm(%p: !cmath.complex<f32>, %q: !cmath.complex<f32>) -> f32 {
    %norm_p = "cmath.norm"(%p) : (!cmath.complex<f32>) -> f32
    %norm_q = "cmath.norm"(%q) : (!cmath.complex<f32>) -> f32
    %pq = arith.mulf %norm_p, %norm_q : f32
    return %pq : f32
  }
In particular, this registration is done entirely at runtime and does not require any recompilation.

How this works

IRDL-defined dialects are registered through the dynamic dialect infrastructure. On invocation of mlir-opt, we parse an IRDL file to register the dialects in the context. Then, we proceed with other MLIR files that can reference dialects described in the IRDL file.

One can see the verification process as a matching algorithm. Each !irdl.attribute SSA value corresponds to a single attribute, and operations define the matching constraints. Thus, using a value twice corresponds to having an equality constraint between two attributes. In order to simplify the language and the implementation, types are wrapped in a TypeAttr, so matching a type t corresponds to matching the outer TypeAttr, and then the contained type.
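This matching view can be sketched in a few lines of Python. This is a conceptual model only, not the actual IRDL implementation: the constraint encoding, the `ConstraintCtx` class, and the tuple-based attribute representation are all illustrative.

```python
# Conceptual model of IRDL verification as constraint matching.
# Attributes are modeled as strings ("f32") or ("name", [params]) tuples.
# Reusing a constraint variable implies an equality constraint.

class ConstraintCtx:
    """Tracks which attribute each constraint variable has matched so far."""
    def __init__(self):
        self.bound = {}  # variable name -> matched attribute

def match(constraint, attr, ctx):
    kind = constraint[0]
    if kind == "is":          # irdl.is: the attribute must be exactly this one
        return attr == constraint[1]
    if kind == "any":         # irdl.any: anything matches
        return True
    if kind == "any_of":      # irdl.any_of: at least one branch must match
        return any(match(c, attr, ctx) for c in constraint[1])
    if kind == "all_of":      # irdl.all_of: every branch must match
        return all(match(c, attr, ctx) for c in constraint[1])
    if kind == "var":         # a reused SSA value: equality across uses
        name, inner = constraint[1], constraint[2]
        if name in ctx.bound:
            return ctx.bound[name] == attr
        if match(inner, attr, ctx):
            ctx.bound[name] = attr
            return True
        return False
    if kind == "parametric":  # irdl.parametric: base definition + parameters
        base, params = constraint[1], constraint[2]
        if not (isinstance(attr, tuple) and attr[0] == base):
            return False
        return len(attr[1]) == len(params) and all(
            match(c, a, ctx) for c, a in zip(params, attr[1]))
    raise ValueError(f"unknown constraint kind: {kind}")
```

For the cmath.norm example, the operand constraint is cmath.complex<%0> and the result constraint is %0 itself, so once the operand binds the variable, the result must reuse the same scalar type:

```python
ctx = ConstraintCtx()
scalar = ("var", "scalar", ("any",))                    # %0 = irdl.any
operand = ("parametric", "cmath.complex", [scalar])     # %1 = irdl.parametric

match(operand, ("cmath.complex", ["f32"]), ctx)  # binds scalar to f32
match(scalar, "f32", ctx)                        # same scalar: accepted
match(scalar, "f64", ctx)                        # different scalar: rejected
```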

Additionally, C++ attributes and types can be registered to be used within an IRDL definition. This is achieved through the AttributeWrapper and TypeWrapper infrastructure, which specifies the expected structure of a C++ attribute or type. Those wrappers are registered before reading the IRDL file. For example, this allows parsing the constraint irdl.parametric "builtin.complex"<%0>, which constrains a type to be a ComplexType with a single parameter constrained by %0.
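The wrapper idea can be sketched as a registry mapping a type name to a function that exposes its parameters, so that a parametric constraint can be checked against a C++-defined type. All names here are illustrative; this is not the actual MLIR TypeWrapper API.

```python
# Hypothetical sketch of the TypeWrapper concept: wrappers expose the
# parameters of a C++-defined type so IRDL constraints such as
# irdl.parametric "builtin.complex"<%0> can match against it.

type_wrappers = {}  # type name -> function returning the type's parameters

def register_type_wrapper(name, get_params):
    type_wrappers[name] = get_params

# Stand-in for the C++ builtin.complex type, modeled as ("builtin.complex", elem).
register_type_wrapper("builtin.complex", lambda ty: [ty[1]])

def match_parametric_cxx(name, param_constraints, ty, match_param):
    """Match a C++-defined type against a parametric constraint via its wrapper.

    match_param is the constraint checker for individual parameters.
    """
    wrapper = type_wrappers.get(name)
    if wrapper is None or ty[0] != name:
        return False
    params = wrapper(ty)
    return len(params) == len(param_constraints) and all(
        match_param(c, p) for c, p in zip(param_constraints, params))
```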

Feature list

Dialect definition:

  • irdl.dialect: Defines a new dialect.
  • irdl.type / irdl.attribute: Defines a new type/attribute.
  • irdl.parameters: Defines constraints over the parameters of the type/attribute definition.
  • irdl.operation: Defines a new op.
  • irdl.operands: Defines constraints over the operation's operands.
  • irdl.results: Defines constraints over the operation's results.

Available attribute constraints:

  • irdl.is: Checks that an attribute is exactly the one given.
  • irdl.any: Satisfied by any attribute.
  • irdl.any_of: Checks that an attribute satisfies one of the given constraints.
  • irdl.all_of: Checks that an attribute satisfies all of the given constraints.
  • irdl.parametric: Checks that an attribute is an instance of a given attribute definition, and that its parameters satisfy the given constraints.


The current patch contains the core data structures and concepts of IRDL. This is by far not a complete implementation, but it lays the necessary groundwork for it. In particular, here is a list of features that would be added next (in no particular order):

  • Add support for attributes and regions.
  • Add support for custom verifiers that can be registered from C++.
  • Add support for variadics.
  • Add support for traits and interfaces.
  • Add a separate irdl-interp dialect to compile IRDL specifications into a faster and complete verifier, similar to pdl-interp.

Remaining questions on the PR

Currently, this patch only defines a single TypeWrapper, for builtin.complex. However, users need to define their own wrappers to use the other types or attributes defined in MLIR. I feel that the current design is not scalable, because the wrappers are defined in the IRDL dialect, meaning that the IRDL dialect would depend on any dialect that defines a type or an attribute. I currently have no clear solution to this problem in mind, and would be happy to hear what people would suggest to fix it.

Mathieu and @Moxinilian


I’ve been following this development for a while and I’m glad it’s finally being pushed upstream. Defining new dialects and fighting with table-gen and C++ isn’t really productive.

I love the simplicity of defining new types and operations, but there are a few important things that might not be trivial to implement. I’d love to hear if you already have plans for them (or if they’re already done some other way).

Dialect to Dialect conversion

Once types and operations are declared, I can “use” them in IR and the verifier will check operands, types, and return values. But what happens next isn’t well defined in this RFC.

We have dialects that we lower to linalg or scf, others that lower to scf or directly to llvm. How can you control, from the IR, which ones do what?

I imagine the current prototype still depends on lowering passes that convert your new dialect into others with C++ code, which is perfectly fine.

But I imagine a day when all dialects are defined with IRDL, I’d hope we could have an operation for converting some op/type to another dialect, including LLVM.

For example:

module {
  irdl.dialect @cmath {
    irdl.type @complex {
      // (definition as above)
    }

    irdl.operation @norm {
      // (definition as above)
    }

    // Unsure of the syntax
    irdl.conversion @convert(%norm<!type> : !norm<!type>) attributes {dialects = math, arith} {
      %0 = irdl.operand(%norm) // complex<type>
      %real = %0#0
      %imm = %0#1
      %1 = arith.mulf %real, %real
      %2 = arith.mulf %imm, %imm
      %sum = arith.addf %1, %2
      %ret = call @math::sqrt(%sum)
      return %ret
    }
  }
}
A concrete conversion would look for convert functions that take the input types (!norm<!type> means operation @norm on parametric type !type) to the destination dialects, like template functions.

Multiple convert methods can be written, for instance different types (int vs float) or different dialects (linalg or scf or llvm), and which one to choose to “lower to” will depend on what matches.
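The dispatch the commenter describes could be sketched roughly as follows. This is purely illustrative of the proposal, not anything that exists in MLIR: conversions are registered per operation, and the one whose operand types match is selected, like overload resolution on template functions.

```python
# Hypothetical sketch of the proposed conversion dispatch. Multiple @convert
# definitions can exist for one op; selection is by matching operand types.
# All names and the string-based type encoding are illustrative.

conversions = []  # (op_name, operand_types, rewrite_fn)

def register_conversion(op_name, operand_types, rewrite_fn):
    conversions.append((op_name, operand_types, rewrite_fn))

def select_conversion(op_name, operand_types):
    """Return the first registered conversion matching op name and types."""
    for name, types, fn in conversions:
        if name == op_name and types == operand_types:
            return fn
    return None

# Different lowerings for different instantiations of the parametric type.
register_conversion("cmath.norm", ["cmath.complex<f32>"],
                    lambda ops: "lowered via math/arith")
register_conversion("cmath.norm", ["cmath.complex<i32>"],
                    lambda ops: "lowered via integer path")
```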

let extraClassDeclaration / builders

On a more practical point, many operations have declarations for special builders and extra functionality that is injected into the class declaration via special table-gen directives.

Is there an equivalent here? If not, how would we add extra functionality to operations, for instance to facilitate IR generation or to carry additional information?

Storage classes

How do you plan to support custom storage for operations from IRDL definitions, if needed? Both in the IRDL syntax and C++ implementation of those classes.


For the dialect-to-dialect conversion, this is something that should not be handled by IRDL, but rather by PDL. We can use PDL to write transformations/optimizations, and my understanding is that this is possible without compiling to C++, or at least could be implemented that way.

For the builders/storage, I haven’t experimented with generating C++, and I’m not sure what direction we would like to take there. Maybe we could try to generate ODS directly, or maybe we could try to generate IRDL from ODS? But having two ways of defining dialects is probably not the way to go for now, unless we have clear plans and input from the whole community. Otherwise, one obvious solution would be to “copy” what ODS does and provide an operation to define builders/storage classes, where we inline C++. However, as I haven’t thought about it much, maybe there is a better way.

At least for now, IRDL only handles attribute parameters, since we can then manipulate them in a generic way.

Right, then I’d encourage this to be a core part of the design instead of an afterthought. For example, to have tests on toy dialects that convert to LLVM and even possibly execute (in mlir-cpu-runner for ex.) to check for correct outputs, etc.

I’d avoid this. ODS rules and syntax can change at any time, and it does so frequently. You’d be fighting against unrelated changes for no reason.

This is exciting to see. I’ve been using the fork with IRDL in a personal project and it’s been working well. I think this can be an important building block in some really interesting new ways to use MLIR.

Regarding the question of transformation and PDL, this is something I’d like to explore. In the side project I mentioned, I want to use IRDL as the base layer for defining dialects, and PDL as the layer for transformation. I’d like to see IRDL and PDL compose together well. I haven’t yet started working on the PDL parts, but I’m excited to see where this goes.


Right, then I’d encourage this to be a core part of the design instead of an afterthought. For example, to have tests on toy dialects that convert to LLVM and even possibly execute (in mlir-cpu-runner for ex.) to check for correct outputs, etc.

Yes, this connection to PDL is our long-term intended design, so it makes sense to add tests right away. I’ll add a lowering from our cmath example to arith, using the -test-pdl-bytecode-pass, to make it clear that this is possible. Note, these tests are already testing loading PDL passes at runtime, so this is something that’s already possible in MLIR.

Then, as @mikeurbach mentioned, we could probably try to improve PDL later on to provide better support for IRDL (if this is something the community wants).


Awesome, thanks!

Nice work!

I like the dynamic aspect of IRDL (+PDL). It seems like a perfect tool for prototyping new dialects and then generating them from Python & Co without ever touching C++. I expect that this could accelerate experimentation with different dialect designs quite a bit. It also makes MLIR more accessible for users that do not have a machine that is fast enough to build LLVM (e.g. students).

The fact that a dialect definition is IR and can be transformed is another plus. I have worked on OpDSL (Linalg OpDSL - MLIR) in the past. It is a DSL to define type-polymorphic ML operators such as matmul and convolution. The DSL generates a YAML file that is further translated to ODS definitions to produce a large set of operations. I imagine IRDL could be useful in such a scenario. For example, it may be possible to implement a pass that transforms a dialect of type-polymorphic ML operators (defined in IRDL) into a dialect of specialized floating-point, integer, or complex ML operators (defined in IRDL). Additionally, it may also be beneficial to dynamically load only the operations that a specific ML model needs, rather than precompiling the set of all possible operations.

Great to see this being upstreamed! It has been a long journey and I think it will be a valuable addition to the MLIR toolkit. Question: do you envision MLIR upstream hosting IRDL dialects?


This is something I’m not sure about yet. For now, hosting an IRDL dialect would lock it into this dynamic world and make it impossible to use from the C++ interface; we couldn’t create a new operation of this dialect with a “nice” interface, since there is no C++ class defined per operation. So at least for now I don’t have a good grasp of which dialects would make sense. Though maybe there are some uses for OpDSL, as @gysit suggested.

I think OpDSL is indeed a possible use case. It is a finished project though and I assume it only makes sense to think about an IRDL implementation once there is a need to reengineer it.

Regarding upstreaming IRDL dialects, I wonder if it is worth considering something like a package/dialect manager that allows users to install the dialects they like (independent of the MLIR repository). I remember there have been discussions about this idea in the past, and given the non-C++ nature of IRDL dialects, that may actually be possible now? Note that I am not suggesting we implement such a package manager right now; I think the idea is orthogonal to this RFC.

Actually, I think OpDSL would be a perfect candidate to see how this can scale and replace an existing piece of infrastructure.
In particular, I have been holding off on trying to provide builtin support for composite/sequences of linalg ops because of the inherent complexity and boilerplate involved in either extending OpDSL or building the generalization in C++.

I’d be happy to see what an evolution of OpDSL written in IRDL would look like, with a clear objective of extending to such composite/sequence ops. The first such candidate ops that people have been interested in (or even started to roll-out based on manual boilerplate C++) are logsumexp, logmatmulexp, softmax and attention.

There will be some interesting aspects that you already mention in your roadmap section, and others that @rengolin mentioned.


Discussing this with @TobiasGrosser this weekend, what I’d like to see in the long run is for the multiple ways to create dialects (ODS, Python, YAML) to become one: IRDL.

Right now, IRDL is a dynamic process, which is perfect for experimental dialects. You don’t need to re-compile MLIR/LLVM just to change a small thing, which makes creating new dialects much easier.

But for more established dialects this has two main downsides:

  1. It doesn’t interoperate with C++ quite like ODS (by design), so more elaborate semantics and implementation can get harder / impossible.
  2. It has to “recompile” the dialect at run time, incurring a compile-time impact for the users of your tools.

Converting IRDL to ODS is a possible avenue, but I’d like to see IRDL directly generating the same (or similar) tables as ODS, so the two could be independent. Debugging multiple levels of indirection in a table-gen back-end isn’t fun.

So, long term, I think we should have IRDL generating C++ tables for the “core” dialects (cf, scf, func, math, arith, etc.), making others available as IRDL files in the main repository (tosa, mhlo, tcp, avx, sve, etc), and having a very easy way to import other dialects from different repositories. The split could even be a compile time option (-DENABLE_MLIR_DIALECTS to make them static).

For the import, simply cloning the repo and pointing the “include path” to the right place should be enough. Adding a “package manager” will require non-trivial infrastructure, but having a directory with IRDL files should be enough to “collect” dialects in one place. This could even be a repo under the llvm-project umbrella.

Right, a directory may be sufficient in this case.

+1. TableGen isn’t great and replacing it would be awesome, but the transition process would be very painful.

On the topic of IRDL generating C++: PDL is inherently dynamic, but PDLL, the language built on top of it, has already set a precedent for an optionally dynamic system. PDLL has the option to embed arbitrary C++ code, in which case PDLL generates a mix of C++ code and PDL bytecode. IRDL(L?) can follow the same example: optionally dynamic for areas where it’s appropriate, but with full access to C++ to implement core MLIR dialects.


My hope is to reuse the same ideas as PDL/PDLL to handle custom C++ constraints, so we can still be Turing-complete if we want to. This is one of the next patches I would like to add to IRDL.

I would be careful, though, about thinking that we could replace TableGen with IRDL. While many people have issues with TableGen, we should first understand what these issues are, and why/how IRDL could resolve them.


Right, I didn’t mean to put pressure on this work. Your current plan is solid and very relevant to MLIR as it is.

My point about replacing table-gen with IRDL isn’t about IRDL or MLIR per se, it’s about table-gen in general. After many years “understanding what the table-gen issues are and trying to resolve them”, I think it’s pretty unanimous that we should find better technologies for whatever is possible.

And even if we get to a point where we start replacing existing table-gen dialects with IRDL/PDL/PDLL, it’ll take years to get rid of table-gen in MLIR, so this should put no pressure at all on this work for the foreseeable future.