Reactive specification in dialects `tf`, `tf_executor`, `tosa`. Is this the right place to ask questions?

Hello,

I’d like to better understand the semantics and expressiveness of the native TensorFlow reactive specification layer. This covers some operations of the tf and tosa dialects (such as tf.Placeholder), but mostly operations of the tf_executor dialect.

Here are two examples of questions we’d like to ask:

  • Is it possible to place a tf.Placeholder operation inside a frame delimited by tf_executor.Enter and tf_executor.Exit? In other words, is it possible to have multiple tf.Placeholder operations working at different paces? For instance, can tf_executor.Switch be used so that some tf.Placeholder operations work on odd cycles while others work on even cycles (see the sketch after this list)? The question is of course twofold: “what is the intended semantics?” and “what can the implementation flows actually do, e.g. when lowering through iree or tfrt?”
  • Why is tf.Placeholder part of the tf dialect rather than tf_executor? The overall organization seems to separate the computational part in tf from the control/reactive part in tf_executor. For instance, tf_executor defines the sources and sinks related to state, but not the primary data source, tf.Placeholder. Is this due to historical reasons? The fact that the placeholder also became part of the tosa dialect makes me doubt it (tosa.cond_if and tosa.while_loop do not pose the same problem, as they are mere control structures that do not involve interaction with the environment).
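
To make the first question concrete, here is the kind of structure I have in mind. This is only a hypothetical sketch with approximate syntax (the op forms and attributes are not verified IR); whether such a structure is legal and meaningful is precisely what I am asking:

```mlir
// Hypothetical sketch, approximate tf_executor syntax -- not verified IR.
func @paced(%pred: tensor<i1>) {
  tf_executor.graph {
    // A placeholder wrapped in an island, i.e. a "normal" node.
    %in, %ctl = tf_executor.island {
      %p = "tf.Placeholder"() {dtype = f32} : () -> tensor<f32>
      tf_executor.yield %p : tensor<f32>
    }
    // Route the value so that one consumer is live when %pred is true
    // (say, on odd cycles) and the other when it is false (even cycles).
    %odd, %even, %ctl2 = tf_executor.Switch %in, %pred : tensor<f32>
    // ... consumers of %odd and %even, possibly inside an
    // Enter/Exit frame, which is the questionable part ...
    tf_executor.fetch
  }
  return
}
```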

I’m not sure this is the right place to ask, hence my subject line.

Dumitru

We have a point of spec feedback open for clarification on why that op exists in TOSA. I suspect it was part of some initial modeling/conversion work that becomes redundant in an MLIR realization. I would not consider it related at all to your questions about tf_executor (which, I believe, was just doing the best it could to model some questionable structures of the original GraphDef design).


I wondered about the Placeholder very early on in TOSA, but I don’t even remember why or how it was justified at the time; it seems redundant with the function API to me.

For TensorFlow, Placeholders are useful in a v1 graph, where you have a flat soup of nodes and will provide some inputs later. As far as I know, they aren’t useful or needed once you have a function abstraction, since the function arguments already play this role.
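
For illustration, a minimal sketch of what I mean (approximate syntax; the function name and op choice are just examples):

```mlir
// With a function abstraction, the argument already plays the role a
// Placeholder node played in a flat v1 GraphDef: it is the value that
// will be provided later, at call/run time.
func @f(%x: tensor<f32>) -> tensor<f32> {
  %0 = "tf.Square"(%x) : (tensor<f32>) -> tensor<f32>
  return %0 : tensor<f32>
}
```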

They could have been placed in the tf_executor dialect; however, they weren’t, because they act as normal nodes in the graph and don’t have a particular execution semantics. The genesis of the tf_executor dialect was to model specifically the ops that have a very specific runtime semantics: they are not just “normal nodes” in a TensorFlow graph, they have hardcoded handling in the runtime.
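
In IR terms, the split looks roughly like this (a sketch with approximate syntax; %pred is assumed defined elsewhere): a Placeholder sits inside a tf_executor.island like any other tf op, while the ops with hardcoded runtime handling (Switch, Merge, Enter, Exit, NextIteration, ...) are first-class tf_executor operations:

```mlir
tf_executor.graph {
  // A "normal node": no special runtime handling, so it is wrapped in an
  // island like any other tf op -- hence tf.Placeholder, not tf_executor.*.
  %p, %ctl = tf_executor.island {
    %0 = "tf.Placeholder"() {dtype = f32} : () -> tensor<f32>
    tf_executor.yield %0 : tensor<f32>
  }
  // An op with hardcoded executor semantics, modeled directly.
  %t, %f, %ctl2 = tf_executor.Switch %p, %pred : tensor<f32>
  tf_executor.fetch %t : tensor<f32>
}
```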


This is correct: placeholder and identityn were just removed from TOSA in the past few days.
