Tf_executor dialect description

The dialects included with the LLVM/MLIR distribution are well documented: for each one we know its purpose, and for each type and operation, its semantics.

I can’t find the same information for the tf_executor dialect. Do you know if it exists?

The tensorflow dialect named tf_executor is defined in the TensorFlow codebase, and there are inline descriptions of the ops. I am not aware of a place where this gets published as rendered HTML, though.

TensorFlow MLIR is where the rendered forms are, and for a description of these ops you can also refer to the TF dialect design review/RFC on the TF side.

Here is the original design if you missed it:


Thanks again @joker-eph , the original description of the tf_executor dialect was extremely useful. It’s quite outdated now, but I have been able to take the small code example and update it to a version that can be converted to a .pb file and then run using the current version of tensorflow (code provided below).

Would someone be willing to help me in extending this with explanations concerning the semantics of the dialect? The idea would be:

  • for me to ask questions on topics I still don’t understand and record my understanding in a .md file
  • for you to check the correctness of what I write and clarify various aspects.

Topics I have difficulty with include frames and the use of Enter/Exit, locations, and a few others (the various links provided here do not give a true semantics description; they seem to be a reference for people who already understand how the whole thing works).

Here is the updated code. I can also provide a conversion and execution script.

module attributes {tf.versions = {bad_consumers = [], min_consumer = 0 : i32, producer = 0 : i32}}  {
  func @main() {
    %fetches = tf_executor.graph {
      %count.init, %control_count = tf_executor.island
        wraps "tf.Const"() {device = "", value = dense<32> : tensor<i32>} : () -> (tensor<i32>)
      %minusone, %control_minusone = tf_executor.island
        wraps "tf.Const"() {device = "", value = dense<-1> : tensor<i32>} : () -> (tensor<i32>)
      %zero, %control_zero = tf_executor.island
        wraps "tf.Const"() {device = "", value = dense<0> : tensor<i32>} : () -> (tensor<i32>)

      %count.init.myframe, %ctl0 = tf_executor.Enter %count.init frame "myframe"
        : (tensor<i32>)->(tensor<i32>,!tf_executor.control)
        {T = i32, device = ""}
      %minusone.myframe, %ctlX = tf_executor.Enter %minusone frame "myframe"
        : (tensor<i32>)->(tensor<i32>,!tf_executor.control)
        {T = i32, device = ""}
      %zero.myframe, %ctlY = tf_executor.Enter %zero frame "myframe"
        : (tensor<i32>)->(tensor<i32>,!tf_executor.control)
        {T = i32, device = ""}

      %next_count, %tok, %ctl1 = tf_executor.NextIteration.Source : tensor<i32>
      %loop.body.init,%dontknow1,%ctlMerge = tf_executor.Merge %count.init.myframe, %next_count : tensor<i32>
        {N = 2 : i64, T = i32, device = ""}
      %dec_count, %ctlAdd = tf_executor.island
        wraps "tf.Add" (%loop.body.init, %minusone.myframe) {device = ""} : (tensor<i32>, tensor<i32>) -> tensor<i32>
      %loop_cond, %ctlNE = tf_executor.island
        wraps "tf.NotEqual" (%dec_count, %zero.myframe) {device = ""} : (tensor<i32>, tensor<i32>) -> tensor<i1>
      %true, %false, %ctlSwitch = tf_executor.Switch %dec_count, %loop_cond  : (tensor<i32>,tensor<i1>)->(tensor<i32>,tensor<i32>,!tf_executor.control)
        {T = tensor<i32>, device = ""}
      tf_executor.NextIteration.Sink[%tok] %false : tensor<i32>
        {T = i32, device = ""}
      %exit_count, %ctlExit = tf_executor.Exit %true : tensor<i32>
        {T = i32, device = ""}
      tf_executor.fetch %exit_count : tensor<i32>
    }
    return
  }
}
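To check my understanding, here is a plain-Python sketch of how I read the dataflow semantics of the loop above (Merge choosing between the Enter value and the NextIteration value, Switch routing on the NotEqual predicate, Exit leaving the frame). This is only an analogy, not how the TF executor is implemented:

```python
# A plain-Python analogue of the dataflow loop above; names mirror the
# MLIR SSA values, but this is just a sketch of the semantics.

def run_countdown(init=32, step=-1, bound=0):
    # Enter: the three constants enter the frame "myframe".
    count = init        # %count.init.myframe
    minusone = step     # %minusone.myframe
    zero = bound        # %zero.myframe

    first_iteration = True
    next_count = None   # value carried by the NextIteration back edge
    while True:
        # Merge: the first iteration takes the Enter value, later
        # iterations take the NextIteration value.
        loop_body_init = count if first_iteration else next_count
        first_iteration = False
        dec_count = loop_body_init + minusone    # tf.Add island
        loop_cond = dec_count != zero            # tf.NotEqual island
        # Switch: while the predicate holds, the value goes around the
        # back edge; otherwise it flows to Exit and leaves the frame.
        if loop_cond:
            next_count = dec_count               # NextIteration.Sink/Source
        else:
            return dec_count                     # Exit

print(run_countdown())  # 0
```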

I also have a problem executing the previous example: the body of the inner while loop, encoded with the Enter and Exit operations and the myframe frame, seems to be executed only once.

I convert the MLIR program above into a GraphDef with:
tf-mlir-translate --mlir-to-graphdef try.mlir >try.pbtxt

Then, I load and execute the graph with:

import tensorflow as tf
from google.protobuf import text_format

f = open("try.pbtxt")
txt = f.read()
gdef = text_format.Parse(txt, tf.compat.v1.GraphDef())
session = tf.compat.v1.InteractiveSession()
tf.compat.v1.import_graph_def(gdef, name="")  # import into the session's graph
x =["tf.Add:0"])

Printing x after execution gives [31] instead of the [0] I expected. I assume there is a problem with the code or the execution script, but I cannot find it. Please help.


PS: to understand the semantics of frames, I used this document.

That is more a question about TF control flow v1, which is being deprecated at the Graph level and which folks have been told not to use for ~2 years now. But you can read about those ops in the linked document, which (I just noticed) is similar/identical to the white paper you linked.

We have many tests that generate pbtxt files; I’d start with one of those, especially the ones that use function args/returns and specify input/output args, and that use control flow v2 ops rather than the lowered Switch/Merge form with frames.

This would be a big one, especially if you want to query the graph the way you are doing: the names of ops are encoded in the locations, so if you try to fetch a value by name, you have to provide those names. Also, importantly, we don’t aim to constrain internal names, so grabbing the value of an op inside a loop is possible, but it results in pruning the graph so that those values become the inputs/outputs in MLIR (e.g., the GraphDef feed/fetch specification mechanism doesn’t transfer directly to the MLIR representation, where we use explicit args and returns in general).
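If you are unsure which names survive the round-trip, one way to find out (without loading TF at all) is to scan the generated pbtxt for node names. A rough sketch, assuming the two-space indentation that text_format produces for node fields:

```python
import re

# Rough sketch: list node names in a pbtxt GraphDef by scanning for
# `name:` fields indented two spaces inside each `node { ... }` block.
def node_names(pbtxt):
    return re.findall(r'^  name: "([^"]+)"', pbtxt, flags=re.MULTILINE)

# Illustrative pbtxt fragment; your try.pbtxt will have different names.
sample = '''node {
  name: "Const"
  op: "Const"
}
node {
  name: "while/Exit"
  op: "Exit"
}
'''
print(node_names(sample))  # ['Const', 'while/Exit']
```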

I suspect the issue is this: you’re asking for the output of a node that is inside the loop. I’m not sure that is well defined, but I suspect it returns the value at the first iteration of the loop (32 + (-1) = 31, which matches what you see).
You need to give a name to a node that comes after the Exit to get the value after the loop.
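For example, once you know the node names, you can pick the Exit node’s output as the fetch instead of the in-loop Add. A sketch over hypothetical (name, op) pairs pulled from a GraphDef (the actual names in your graph will differ):

```python
# Hypothetical (name, op) pairs from a GraphDef; the real names differ.
nodes = [("Const", "Const"), ("Add", "Add"), ("while/Exit", "Exit")]

def fetch_after_loop(nodes):
    # Fetch the first output (":0") of the Exit node, which carries the
    # value after the loop, rather than the in-loop Add.
    for name, op in nodes:
        if op == "Exit":
            return name + ":0"
    raise ValueError("no Exit node found")

print(fetch_after_loop(nodes))  # while/Exit:0
```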