Partitioning an MLIR function along dialect lines?

I’m considering writing code that divides a given MLIR function into multiple functions along dialect lines — for instance, to separate the TF code from a larger spec and then process it using existing tools.

Has this been done in other projects? I’m looking for code that could serve as a model or best practices.

Best,
Dumitru

This description looks pretty vague to me. Can you describe better what kind of partitioning you intend to do?

Assume you have an MLIR function containing operations of the TF dialect intertwined with operations of another dialect, such as function calls (from the std dialect). Today, I think it is impossible to apply the TF lowering path (TF->HLO->Linalg->STD/LLVM) to such functions. The lowering passes refuse to work because they don’t recognize the foreign operations — even in simple cases where no dialect-specific transformation or analysis is required on them, as with the function call.

But if the TF parts can be separated into one or more functions called from another, then integration becomes possible.
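To make the intent concrete, here is a minimal sketch of what such a split might look like, using 2019-era MLIR syntax. The op choices (tf.Abs, tf.Neg), the outlined function names (@tf_part_0, @tf_part_1), and the external callee @log_step are all hypothetical, purely for illustration:

```mlir
// Before: TF ops intertwined with a std call in one function.
func @mixed(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %0 = "tf.Abs"(%arg0) : (tensor<4xf32>) -> tensor<4xf32>
  call @log_step(%0) : (tensor<4xf32>) -> ()
  %1 = "tf.Neg"(%0) : (tensor<4xf32>) -> tensor<4xf32>
  return %1 : tensor<4xf32>
}

// After: the pure-TF regions are outlined into separate functions,
// so the TF lowering path can run on them in isolation while the
// std call stays in the driver function.
func @tf_part_0(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %0 = "tf.Abs"(%arg0) : (tensor<4xf32>) -> tensor<4xf32>
  return %0 : tensor<4xf32>
}
func @tf_part_1(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %0 = "tf.Neg"(%arg0) : (tensor<4xf32>) -> tensor<4xf32>
  return %0 : tensor<4xf32>
}
func @mixed(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %0 = call @tf_part_0(%arg0) : (tensor<4xf32>) -> tensor<4xf32>
  call @log_step(%0) : (tensor<4xf32>) -> ()
  %1 = call @tf_part_1(%0) : (tensor<4xf32>) -> tensor<4xf32>
  return %1 : tensor<4xf32>
}
```

The split is purely along dialect boundaries here; a real pass would also have to decide how to thread SSA values across the outlined function boundaries.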

Of course, this can be done by hand (like everything), but the idea would be to automate this function-splitting process. I was wondering if this has been done before.

We do have such cases, and we actually use std calls in that path; some ops even get canonicalized to them. What probably doesn’t work is island coarsening and the like — those are intended for a Graph being imported to get to the TF dialect. But if you start from TF dialect with those calls intertwined, it should work. We even have some tests where we use unregistered ops along with transforms. Now, the final TF to HLO conversion is a full conversion, so if you have ops that cannot easily be represented in HLO, it would fail there.

Could you file an issue on the TF GitHub with a reproducer? (If it isn’t one of the above, then it is probably something in the bridge that needs to be expanded.)

@jpienaar The TF to HLO conversion (-xla-legalize-tf) has an allow_partial_conversion option.
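For illustration, with partial conversion allowed, the pass can leave unsupported ops in place instead of failing outright. The result might look roughly like this — the xla_hlo op name and the tf.SomeUnsupportedOp placeholder are illustrative and version-dependent, not taken from an actual run:

```mlir
func @f(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  // Legalized to the HLO dialect:
  %0 = "xla_hlo.abs"(%arg0) : (tensor<4xf32>) -> tensor<4xf32>
  // Left untouched because it has no easy HLO equivalent:
  %1 = "tf.SomeUnsupportedOp"(%0) : (tensor<4xf32>) -> tensor<4xf32>
  return %1 : tensor<4xf32>
}
```

The remaining TF ops would then still need to be handled by some other mechanism, which is where the function-splitting idea above could complement partial conversion.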