MLIR to machine IR

Hi all,

After watching many presentations about MLIR and looking through the open projects, I’m interested in how MLIR is used in TensorFlow.
However, the whole translation/transformation process of converting a TF graph to machine-level IR, such as CUDA’s NVVM IR, using MLIR and applying optimizations along the way is still unclear to me (i.e. TensorFlow Graph >> TensorFlow MLIR >> [optimization] >> machine IR?)

I have successfully exported the TF graph into .mlir by following an example at https://www.soinside.com/question/idrKWG7uVqWh8gu9TafpjD
Could you suggest a study route or some pointers to help me understand the next steps?
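For reference, the exported TF-dialect MLIR looks roughly like the sketch below (hand-written for illustration, not my actual output; the ops, shapes, and attributes are just assumptions for a tiny matmul+relu graph):

```mlir
// Illustrative sketch of TF-dialect MLIR (not actual exported output).
// TF ops appear as generic "quoted" operations on high-level tensor types.
func @main(%arg0: tensor<2x3xf32>, %arg1: tensor<3x4xf32>) -> tensor<2x4xf32> {
  %0 = "tf.MatMul"(%arg0, %arg1) {transpose_a = false, transpose_b = false}
         : (tensor<2x3xf32>, tensor<3x4xf32>) -> tensor<2x4xf32>
  %1 = "tf.Relu"(%0) : (tensor<2x4xf32>) -> tensor<2x4xf32>
  return %1 : tensor<2x4xf32>
}
```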
Thanks!

(Note that this forum is about the MLIR project, which is entirely independent of TensorFlow; TensorFlow is a user of MLIR. I’m still happy to answer your TensorFlow questions, though.)

While many pieces exist, we haven’t connected many end-to-end flows in TensorFlow yet. The best example of an end-to-end flow at the moment is the TF Kernel Generator project, which was presented at one of our open design meetings a few weeks ago. You can find the slides and the recording on this page: https://mlir.llvm.org/talks/

The graph you are trying to execute should be compilable by the IREE project, which has a frontend for compiling TensorFlow saved models via MLIR to a combination of CPU and GPU (via SPIR-V) device code and host code. We’re not really GA yet with binaries to download and such, but you are welcome to follow along with one of the colabs. Most of the examples are end-to-end, but the pipeline is hackable enough to get intermediate artifacts that you can do other things with. The best source of information on that is to ask on the project’s Discord channel.

It is possible to do a lot of this with the stock tools in the tensorflow project, and people have gotten that to work for various models, but I am not aware of any e2e documentation for it. In addition, I don’t think the compiler pipeline on the tensorflow side supports variables or mutation, which I note your example is using (IREE does support such graphs, but not all of the kinks are worked out).

As Mehdi says, this forum isn’t really about tensorflow, but various people who hang out here do work on the other projects where the more framework specific work happens.

@mehdi_amini Thank you very much for your answer.
Yes, I noticed that the MLIR project is part of LLVM, not TensorFlow. My apologies for raising this kind of question.
I have rephrased the topic to be MLIR to Machine IR.

Another question: I noticed some presentations mentioned that “MLIR has a 1:1 mapping between it and LLVM’s own representation, allowing MLIR to emit GPU and CPU code through LLVM”.
I guess that means I could lower the IR generated by MLIR into LLVM IR and then let LLVM’s backends turn that into machine IR.
For a frontend-specific dialect, such as the TF dialect found in the TensorFlow MLIR GitHub repository, the generated MLIR has the “” type, which I guess is still very high-level compared to LLVM IR.
Do you have any suggestions to help me better understand how to lower this high-level IR to low-level LLVM IR?

I suspect the exact quote is about “MLIR LLVM Dialect” having a 1:1 mapping to LLVM IR.
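To make that concrete, here is a hand-written sketch (not tool output; the exact syntax varies across MLIR versions) of a small function in the LLVM dialect, with the LLVM IR each op maps to shown in comments:

```mlir
// MLIR LLVM dialect sketch: each op corresponds 1:1 to an LLVM IR instruction.
//   llvm.add %a, %b : i32    <->    add i32 %a, %b
//   llvm.return %0 : i32     <->    ret i32 %0
llvm.func @add(%a: i32, %b: i32) -> i32 {
  %0 = llvm.add %a, %b : i32
  llvm.return %0 : i32
}
```

Once a module is entirely in this dialect, `mlir-translate --mlir-to-llvmir` produces the corresponding LLVM IR, which the usual LLVM backends can compile to machine code.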

I don’t quite get what you mean here?

You mean “tensor type”?

I would start by watching the presentation I mentioned above.

I forgot about IREE in terms of end-to-end pipelines; there are a couple of presentations about IREE on the same page I linked to in my first answer. The IREE GitHub repo should provide instructions for working with the basic examples as well.