[doc] mlir-translate / mlir-opt

Chatting with @River707 today, we figured there is a lack of documentation about some of the project fundamentals, in particular mlir-opt and mlir-translate. The former is involved in the Toy tutorial, but not really explained there, as the tutorial assumes prior knowledge of LLVM (and so of LLVM’s opt). The chapter on exporting to LLVM IR involves calling the LLVM translation but does not talk about mlir-translate or the registrations.

I think we should add more documentation about navigating the project structure, but also about the basic tools we use and their roles.
Before I jump in and start documenting these, it also came up that we may not all be in sync with respect to the mlir-opt vs mlir-translate differences: despite their surface similarities, the two tools are vastly different.

mlir-opt is a testing tool intended to exercise a pass or a sequence of passes. It always loads some MLIR IR as input and emits some other MLIR IR as output.
When extending it downstream, an entry point is exposed so that users can inject a DialectRegistry to make their own dialects (and interfaces) available to the parser. Passes already declare the dialects they produce.
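For example, a minimal downstream *-opt driver might look like this (a sketch: the exact MlirOptMain signature and header locations have shifted across MLIR versions, and MyDialect is a hypothetical stand-in for your own dialect):

```cpp
// my-opt.cpp - minimal downstream *-opt driver (sketch).
#include "mlir/IR/DialectRegistry.h"
#include "mlir/InitAllDialects.h"
#include "mlir/InitAllPasses.h"
#include "mlir/Tools/mlir-opt/MlirOptMain.h"

int main(int argc, char **argv) {
  mlir::registerAllPasses();

  mlir::DialectRegistry registry;
  mlir::registerAllDialects(registry);
  // Inject the downstream dialect (hypothetical) so the parser knows it:
  // registry.insert<mydialect::MyDialect>();

  return mlir::asMainReturnCode(
      mlir::MlirOptMain(argc, argv, "my-opt driver\n", registry));
}
```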

mlir-translate has a less clear contract by nature: it is intended to operate at the MLIR boundary, where it tests functions that either take MLIR IR as input and produce “something else” (export), or take “something else” as input and produce MLIR IR (import).
Even though there is a main entry point like for the *-opt tools, the export and import paths have different API contracts. In particular, only the export case is concerned with parsing MLIR IR; the import case requires much less setup. Right now the registration for an export entry point (like mlir-to-llvmir) can inject a DialectRegistry pre-parsing to define the set of inputs allowed.
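To make this concrete, here is roughly what the upstream --mlir-to-llvmir registration looks like (paraphrased; the exact signature of TranslateFromMLIRRegistration, header paths, and the registration helpers have varied across versions):

```cpp
#include "mlir/Target/LLVMIR/Dialect/All.h"
#include "mlir/Target/LLVMIR/Export.h"
#include "mlir/Tools/mlir-translate/Translation.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"

using namespace mlir;

static TranslateFromMLIRRegistration registration(
    "mlir-to-llvmir",
    // Export hook: MLIR IR in, LLVM IR text out.
    [](ModuleOp module, llvm::raw_ostream &output) {
      llvm::LLVMContext llvmContext;
      std::unique_ptr<llvm::Module> llvmModule =
          translateModuleToLLVMIR(module, llvmContext);
      if (!llvmModule)
        return failure();
      llvmModule->print(output, nullptr);
      return success();
    },
    // Pre-parsing hook: the DialectRegistry injected here defines the set
    // of inputs this "subtool" accepts.
    [](DialectRegistry &registry) {
      registerAllToLLVMIRTranslations(registry);
    });
```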
This leads to a subtle but quite fundamental difference between the two tools, and in particular in the role of the registration for each of them: mlir-opt has a very controlled interface and registers passes with a narrow API (IR in / IR out), while mlir-translate registers translations with a much “rawer” interface (processing buffers). It is almost as if every translation registration defines its own “subtool”!
It almost makes sense to me conceptually: mlir-to-llvmir has not much in common with tflite-to-flatbuffer or mlir-to-bef (for TFRT).


More docs and clear distinctions between the various tools would be great :smiley:

FYI, we have this page in IREE’s developer docs talking about our downstream *-opt and *-translate tools: Developer Overview > Developer Tools.

Interesting, yeah, I guess this was just “known” but never written down, and now we realize different folks “know” different things :slight_smile:

I see mlir-translate derived tools as serving the problem of “I want to be able to unit test implementations of the mlir/Translation.h interfaces”. It has generally worked well for this in my experience; e.g. CIRCT uses it effectively for converting from .fir files to .mlir, and from .mlir to Verilog.

One gap I’m seeing is that we apparently don’t have translations that work with arbitrary MLIR files. For example, if you want to test an “MLIR to XML” translation for some reason, or maybe more reasonably an “MLIR to future binary MLIR format” one, the translation itself wouldn’t be tied to any specific dialects, but you’d want to be able to write test cases that use some dialects specific to your domain. As such, it is important to be able to register the dialects in places outside the TranslateFromMLIRRegistration registry hook.

Likewise, in terms of serving as a testing tool, and for the same reason, it is nice to be able to say that “all uses of this specific tool should allow unregistered dialects” just so I don’t have to always pass the flag. It just clutters up the RUN line and doesn’t make the tool more useful for testing; it is just overly opinionated IMO.

To be clear, I don’t see mlir-translate tools as needing to serve the “tool normal users use to translate things” problem. In the case of CIRCT, we have a tool called firtool which incorporates multiple translations, does autosensing of input, does optimization passes, and has “user level” exposed flags. This is clearly a different thing than mlir-translate!

-Chris


Great! I’ll take inspiration from it to provide a similar doc upstream!

The way I would see it is that you wouldn’t have an mlir-to-xml registration, but instead a circt-to-xml translation registration that would just register the CIRCT input dialects and call the mlirToXml translation function.
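In code, that suggestion could look something like this (a sketch with hypothetical names: mlirToXml is the dialect-independent translation function, and the commented-out dialects stand in for the actual CIRCT set):

```cpp
#include "mlir/Tools/mlir-translate/Translation.h"

using namespace mlir;

// Dialect-independent translation implemented elsewhere (hypothetical).
LogicalResult mlirToXml(ModuleOp module, llvm::raw_ostream &output);

static TranslateFromMLIRRegistration circtToXml(
    "circt-to-xml",
    // Forward to the generic translation; nothing dialect-specific here.
    [](ModuleOp module, llvm::raw_ostream &output) {
      return mlirToXml(module, output);
    },
    // The registration, not the tool, declares the accepted input dialects.
    [](DialectRegistry &registry) {
      // e.g. registry.insert<circt::hw::HWDialect, circt::comb::CombDialect>();
    });
```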

Similarly, the --mlir-to-llvmir translation registration is really only a translation for the upstream dialects implementing the LLVM IR translation (LLVM, NVVM, ROCDL, OpenMP, AMX, …).
If someone downstream wanted to translate to LLVM IR, I would expect them to provide another registration with an appropriate name, copying and adjusting this code.

Yeah, I can imagine that it can make sense for a translation to opt in to “allow unregistered dialects”, but I would add this to the registration of that particular translation: mlir-translate --mlir-to-llvmir would not have it, but mlir-translate --generic-mlir-to-xml would!

Thanks for doing this, really glad to see some effort in this direction!

I was looking at the source of mlir-translate today and my immediate reaction was: “oh, that’s for translating from/to MLIR to/from SPIR-V”. I couldn’t find any other documentation that would clarify that there’s more to it than that :confused:

-Andrzej


I’ve struggled over the years with whether *-translate is primarily a compiler writer’s tool vs a utility for creating “starter tools”. I’m personally in the former camp, but most of the usage that I see is in the latter. I generally find that beyond a certain maturity level, you probably want your own driver/tool rather than indirecting through something like this (i.e. when you really care about something that users are going to need to grok, you want much tighter control over flags and main-level things, and the registration system seems to be an anti-pattern for such things).

If we are talking about cleaning up the *-translate utilities, though, the way that translations are specified on the command line has been a source of confusion for folks: since they are free-standing options, they are hard to pick out in a help screen, and I’ve found no practical way to set a default translation.

But again, with my personal opinion in play (i.e. these are primarily for testing and developing the compiler), I’d be fine with an answer of “if you care about flag ergonomics for users, build your own tool and don’t use *-translate”. I just think that how we position these should be called out specifically in the docs so that we know what level of changes are merited.


But this isn’t actually great, and isn’t properly layered. To dive into a specific example, CIRCT is a collection of a bunch of different dialects with different communities (they each have/use different dialects that compose and mix in with the HW dialect). Your approach would work, but would entail registering the translation one time for each community, or one monolithic registration that links in all the dialects that might be useful.

Why is this better than what I’m suggesting? The testing tools in tools/ are implicitly monolithic already: they statically link a fixed set of stuff, by design. Their .cpp file should be the one that decides what the tool links in; it shouldn’t be the translations themselves doing it, and it shouldn’t be each of the clients of the translation redundantly registering the translation with different names.

-Chris


What “layering” problem are you seeing here? I’m not perceiving it right now.

The problem with what you’re suggesting is that the accepted input isn’t a property of the tool but a property of the registration.
I’m also not sure I see how it serves your example better:
now, instead of adding multiple registrations for each of the cases, you need to write as many tools as you have use cases?

I’m seeing a difference between “what is linked in” and “changes in behavior”: this was a strong reason to move away from all the global registries for many things. That makes it so that having more dialects linked into the tool won’t change its behavior in any way!

You’re losing me as to what you’re suggesting here.

For some reason I thought I responded to this, but it looks like I didn’t actually. Let me go through an example:

Consider an “MLIR to XML” or “MLIR to binary MLIR” translator. The nature of the translator is that it is dialect-independent: it operates on the core MLIR structure of operations, regions, attributes, etc., without interpreting them in a dialect-specific way.

This translation therefore shouldn’t register any dialects in its TranslateFromMLIRRegistration hook. The problem with this, though, is that no dialects get registered when using an mlir-translate style tool, so they all have to be written with the generic operation syntax in the .mlir file, even if the example test cases are using well-known things like arith, scf, or whatever. Particularly for region-based ops this is a huge pain. It also causes implementation coupling between the exact MLIR representation of an op and the tests: “pretty syntax” allows somewhat more freedom in the implementation (e.g. you can rename an attribute without breaking all tests).

This is a bad thing IMO, and we need to fix it, because the core purpose of the mlir-translate tool is to enable testing of translations. The challenge here is that there isn’t any set of dialects that is actually correct here, and putting something in introduces incorrect coupling between the translator and whatever dialects get registered.

I see two paths:


Path 1: If I understand your proposal, you are suggesting to move the translation registration logic away from the translator itself, and put it in each tool. For example, mlir-translate could register a convert-to-xml translation with its own TranslateFromMLIRRegistration implementation that registers affine, scf, arith, etc.

circt-translate would register a convert-to-xml-circt translation that has its own TranslateFromMLIRRegistration that includes hw, comb, esi, …

iree-translate would register a convert-to-xml-iree translation that has its own TranslateFromMLIRRegistration that includes all its stuff etc.

This can work, but leads to lots of duplication. We’d presumably also have a convert-from-xml-foo version of each of these, and the only difference would be whether they print the operations in pretty or generic form. All of these domain-specific translators would have to register the dialects in their own hooks, leading to more duplication.

It is also suboptimal that we’d now have two different ways to register translations: one way for generic translations and another for everyone else. We clearly wouldn’t want to push the burden of registering translations onto all the tools for everything.


Path 2: We can allow mlirTranslateMain to take an optional DialectRegistry, fill it in once in circt-translate.cpp (et al), and have it set up for all translations. Translations now have a single name and are registered in a single consistent way, regardless of whether they are generic or not.

The domain-specific tools have to specify the set of dialects they care about exactly once, for all the translations they perform.

This is simple, and is what I (perhaps too quickly) landed in this patch.
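Concretely, a circt-translate.cpp under Path 2 might look like this (hypothetical sketch: the DialectRegistry parameter on mlirTranslateMain is what the patch proposes, not necessarily the upstream API, and the dialects are stand-ins):

```cpp
#include "mlir/IR/DialectRegistry.h"
#include "mlir/Support/LogicalResult.h"
#include "mlir/Tools/mlir-translate/MlirTranslateMain.h"

int main(int argc, char **argv) {
  mlir::DialectRegistry registry;
  // Populate once, for all translations registered by this tool, e.g.:
  // registry.insert<circt::hw::HWDialect, circt::comb::CombDialect>();

  return mlir::failed(
             mlir::mlirTranslateMain(argc, argv, "circt-translate", registry))
             ? 1
             : 0;
}
```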


There is a second issue, which is that generic translations and domain-specific translate tools can have other considerations as well. For example, when writing generic translations, it is common that the tests mix registered and unregistered dialects (to get to the weird things). In this case it is helpful to be able to set -allow-unregistered-dialect to be on by default for the tool, which requires an MLIRContext.

Does this make sense to you?

I’m sorry again for dropping this thread, for some reason I thought I responded already!

-Chris

Path 2: We can allow mlirTranslateMain to take an optional DialectRegistry, fill it in once in circt-translate.cpp (et al), and have it set up for all translations. Translations now have a single name and are registered in a single consistent way, regardless of whether they are generic or not.

I see some important layering issues with path 2 right now:

  1. One problem with this model is that translations now behave differently depending on external factors (the tool setup). Path 1 has the property that, for a given translation registration (mlir-to-xml for example), xxx-translate --mlir-to-xml always has the same behavior regardless of the tool. This is a property of the *-opt tools in general: the tools control what they register, but they won’t change the meaning of a given registration.

  2. The tools register multiple translations, but by adding some Registry or other options at the top level of the tool instead of inside the registration, you’re actually affecting all of the translations even if they are unrelated. Path 1 keeps the translation registrations isolated from each other (which is a critical aspect for testing!), and your patch breaks this.

  3. There is already a dialect registry provided by the translation registrations, but you added one at the top level of the tool. They are quite redundant: there is no good way for a tool author to really know when to use one or the other, or how they interact with each other. This also increases the complexity of the system. All in all, quite a good indication that something is not right with the model right now.

Right: as commented on the patch, I believe that AllowUnregisteredDialect should be a single flag set by the translation registrations. It does not belong on the tool, for the same kind of reasons I mentioned above.

Tangentially related: having something akin to allowUnregisteredDialect on the DialectRegistry does not seem completely wrong to me.


Path 1 does give you that, but I’m not sure why that’s beneficial? We don’t even have that with the *-opt tools, given that the behavior can change depending on the input. You can’t generally take IR from one *-opt tool, feed it to another, and expect it to behave exactly the same way. Different variants of mlir-opt may have different dialects/extensions/etc. registered that don’t exactly carry over. The passes are still technically the same, but you don’t necessarily get the same behavior. I don’t see what is different about testing translations (we don’t need N pass registrations for every combination of dialects).

Does it? We aren’t calling the translation registry (it’s just for the translate tool) in any kind of production environment, and in those cases the dialects registered/loaded will be different from the tool setup anyway. How does this affect testability? Registering dialects the way we do in mlir-opt hasn’t affected the testability of passes AFAIK.

– River

Seems like it could be reasonable, but yes it is only tangential here.

I see it as fairly important that the behavior is well defined and contained in one place: the registration for the translation. This is a matter of orthogonality and composability; I see it on the same scale as the design principles we have always followed elsewhere, actually. I’m not totally sure what is really controversial here.

The IR given as input to a particular pass should always behave the same way. This got broken recently because of external registration; we considered this a bug, which led to the proposal about promises.

As mentioned at the very beginning of this thread, I see a fundamental difference between *-opt and *-translate: I believe that when we do a side-by-side comparison, the role of a translation registration is best compared to defining a new *-opt tool, more than anything else. This is what I elaborated in the first post of this thread by explaining how each translation registration defines its own “subtool”.

How doesn’t it? Anything that you put in the main() applies to all the registered translations: this seems to me to fairly obviously break the isolation that Path 1 provides. I’m not sure what’s not clear to you here?

Again: you’re missing the fundamental point I made about translations being their own “subtools” (the DialectRegistry they populate being a clear sign of it). So the mlir-translate main() is nothing comparable to the *-opt dialect registry to me.

That doesn’t cover the use case of downstream users attaching extensions that can’t be promised (i.e. the interface is not accessible to the dialect to promise it in the first place).

I guess the big point I’m missing is why we need to view it that way. The duplication that Chris mentions upthread seems very unfortunate for what is supposed to be a developer tool, meant to make testing easier for developers, and not something for production. We are making it more difficult, and the benefit just isn’t clear to me (not that I’m stating there is no benefit, I’m just missing it if there is).

Feel free to propose a rebuttal to my original post: I proposed my own reading of how I believe things are modeled right now; I might be wrong :slight_smile:
I really believe that this isn’t just about a particular implementation choice that is fungible; instead this is a fairly fundamental discussion about what the problem is and how to model it.

I guess I don’t see a significant difference about this either way: the amount of code to write seems trivial and tiny to me. Adding a custom registration for a developer use case does not seem like a hurdle: it seems like a reasonable one-time cost.
Also, the soundness of the model takes precedence for me in general. These seem like the same arguments we faced in the past when we pushed for things that were much more intrusive and obnoxious. In particular, MLIR started with global constructors for registering everything, and I remember how important it was seen that any project got the ability to set up its own *-opt tool with a simple build rule, without writing any code.
We made it much harder when we removed all these global pass registrations and forced people to write a main() of their own and some boilerplate (much more than what’s involved in a translation registration, actually). We then simplified some of the boilerplate, in particular by adding support in TableGen to define passes, but that came later and there are still missing pieces (createXXXPass() functions are still manually written, and we still have this issue for better handling of options).

+1, it was part of the goal when it was added (Tool for translating from/to MLIR. · llvm/llvm-project@47c7df0 · GitHub); the other goal for me is the ability to experiment with what a dedicated tool would look like by piping these together at the translate | opt | translate level.

I see it 100% in the former category, in line with the above. It was meant as a very simple tool to do one thing in isolation. Hooking it up into other things / stretching it doesn’t lead to nice ergonomics IMHO (yeah, it’s useful as a starter tool for a week or two maybe :-)). We could improve some of those aspects (make it a bit more like proper subtools, better error reporting, CLI flag separation), but that goes away from the goal for me: keeping it simple to test translations.

The accepted input is still a property of the tool, as the tool dictates what is registered. When one creates a translate tool, you set the dependencies and what is registered. It would be a failure if we did some random registrations by having libraries and targets that commingle things (e.g., the cases where this breaks down seem like bugs).

--mlir-to-xml would only differ in behavior in terms of accepting custom syntax or unregistered dialects. That doesn’t seem like a core property of the translation to me. The translation only requires an MLIR module (well, some top-level construct) in memory; which dialects may be loaded is not relevant to it, that is the responsibility of the tool. Either the file is loaded or not; the translation happens if loaded, and does so uniformly no matter which dialects are involved. So one would test this translation and have less unrelated noise in its registration (e.g., it shouldn’t care about dialects), and I’d still need to use the correct translate tool for a given input, vs. additionally needing a different registration per tool, which doesn’t seem to add value wrt testing/intended use (it is uniform behavior for the same registration in different tools, but that’s not a value-add here; it makes a peripheral behavior uniform). And I’d be very sad if we had a tool with circt-to-xml, tf-to-xml, and tfg-to-xml options, with these different translations not composing while doing nothing dialect-specific during translation.

I actually see it as being worse for orthogonality in the above case: it commingles the requirements for loading with the translation being tested.

Indeed, and it is still harder than it was. I think the biggest issue that was resolved there was that we enabled folks to create tools without a global registry, or stray visible symbols, or inhibited DCE’ing, or …, and it resulted in cleaning up build deps. So very useful. But that is separable from having a testing tool that uses a registry.

I agree with the formulation of these really being subtools, from the point of view that there is very little structure/similarity between registrations. But I differ in how isolated I see them: basically, in what constitutes the environment they operate in vs. the core translation functionality being registered. And here in particular I see the loading-into-memory-from-file aspect as separate from the transformation, and part of the environment dictated by the tool (heck, even the definition of a file in the original commit was assumed to be a function of the tool; the only requirement was that the translation could be unambiguously named).

Seems like a play on words to me: you’re not fundamentally changing anything about what I wrote. The input is a property of the translation and/or its registration; the tool is just indirectly controlling it by enabling a translation to be registered or not.

Not only: the dialect registry is loading interfaces as well.

Right: I don’t think “making it easy” should be the main metric to optimize; having the right model is more important in terms of software design to me. Let’s not take shortcuts for superficial convenience.

I fail to see how it is the responsibility of the tool to load the input file into memory for --spirv-to-mlir, for example.

Just this is a strong enough indication that having a DialectRegistry on the tool instead of on the registration is absolutely not right.

Ok, nerd sniped…

Isn’t the main metric of this tool to let us have the minimal number of binaries and the least linkage overhead for a project to be able to test translations between arbitrary stream-based representations and the corresponding MLIR? The translation registrations are just a way to “busybox” multiple such things together into one binary and give them a name.

I don’t think there should be much design or “framework” beyond that. Also, while we don’t often do it, there may be cases where I want two otherwise-identical translations that configure the context differently so that I can test (i.e. with and without some dialects registered, or allowing/disallowing unregistered dialects). We do get bugs on those axes but rarely go to the trouble of testing at that level today.

Right: “busybox” is likely the best analogy for how I see the translations!

Right! I believe this is better accomplished by either:

  • adding a command line option to a translation, or
  • adding a new registration for a translation,

rather than having to provide an entirely new binary for every variation (see the sketch below for the first option).
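For the first option, a translation can carry its own llvm::cl::opt that the registered function consults; a minimal sketch with a made-up flag name:

```cpp
#include "llvm/Support/CommandLine.h"

// Flag local to one translation (hypothetical name): variations of the
// same translation are selected per invocation instead of per binary.
static llvm::cl::opt<bool> emitPrettyXml(
    "emit-pretty-xml",
    llvm::cl::desc("Pretty-print the output of the XML translation"),
    llvm::cl::init(false));
```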

I don’t think that’s true? The difference here is the .mlir parser; the translation itself is the same. Further, MlirOptMain already takes a DialectRegistry as an argument, so it has the same behavior if the specific tool prepopulates it with domain-specific dialects. This patch makes the mlir-translate family of tools comparable to mlir-opt.

Even if it were the case, I don’t see how the property you’re describing is actually useful. For example, if “mlir-opt -cse” and “circt-opt -cse” were the same, then “circt-opt -cse” would reject the pretty form of hw.module - that isn’t useful. I don’t see how translation tools are any different.

The only apparent difference between mlir-opt and mlir-translate is that we have a lot more optimization passes than translations in the community, and the muscle in mlir-translate hasn’t been built.

You are correct, it would be registering it for all translations in that tool, just like mlir-opt does for all passes. I agree with you that this is a change, but I don’t see what the problem is. If you want to test an MLIR translation without the circt dialects, then use mlir-translate; no one is taking that tool away.

I agree this is adding complexity by adding a “new thing”. I am trying to justify that with use cases that are underserved. I don’t think that the alternate path is actually better.

I don’t understand why you would think that. Can you elaborate?

-Chris