[RFC] Moving TOSA construction lib into MLIR core


We’d like to propose a progressive move of the TOSA dialect construction utility functions into the MLIR core.

These currently sit as identical copies in two frontend places:

- TensorFlow: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/mlir/tosa/transforms (legalize_common.* and legalize_utils.*)
- Torch-MLIR: TosaLegalizeCommon and TosaLegalizeUtils

The code is intended to remain framework-invariant. The few places where framework-implemented support functions are used will be replaced with standalone code. The Torch side only uses the reduction operator support functions right now, but the rest will progressively be used as new legalizations are added on the Torch side.

There’s the obvious question of why it wasn’t in core to begin with. When originally contributed (~Nov 2020) it was easier to develop and stabilize legalizations from TensorFlow dialects to TOSA while sitting in the framework side close to the legalization pass.

They’ve now reached a point of stability and generality - dynamic shape inference support added by @stellaraccident @rsuderman and team for example - that it’s a good time to consider moving them core side to enable better backend codegen expression, particularly now that there are multiple independent framework paths to TOSA in development.

Value Proposition

With these sitting in core, it enables a common level of service for TOSA legalization paths from multiple frameworks, carrying shape inference and similar capabilities. The related TosaInferShapes pass is already in the core.

E2E paths will likely have frontend exporters emitting TOSA, which is then consumed by a backend codegen stack. Typically both of these will be separate out-of-tree LLVM-based constructs, but all of them will benefit from common constructors for TOSA, callable after their framework-side rewriter has parsed the input op parameters to extract this information.

It is expected that if a framework has a very unique form of high level op construct, it will maintain its own specialized variant of the standard convert*() function in this library.
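For illustration, a framework-side rewriter might invoke such a shared helper roughly like this. This is a hypothetical sketch: the `myfw` dialect, the exact `convertSoftmaxOp` signature, and the accessor names are assumptions for illustration, not the actual library API.

```cpp
// Hypothetical sketch of a framework rewriter reusing a shared convert*()
// helper from the construction library. Names/signatures are illustrative.
struct MyFrameworkSoftmaxRewriter : public OpRewritePattern<myfw::SoftmaxOp> {
  using OpRewritePattern::OpRewritePattern;
  LogicalResult matchAndRewrite(myfw::SoftmaxOp op,
                                PatternRewriter &rewriter) const override {
    // Framework-side parsing of the op's parameters happens here; the
    // framework-invariant TOSA construction helper does the rest.
    std::optional<Value> result = tosa::convertSoftmaxOp(
        rewriter, op, op.getResult(), op.getInput(), /*beta=*/1.0);
    if (!result)
      return failure();
    rewriter.replaceOp(op, *result);
    return success();
  }
};
```

A framework with a very different softmax-like construct would keep its own specialized variant of this pattern while still emitting through the common library where possible.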


Since these utility functions get called from the framework side and there is no framework-like dialect in the core that builds TOSA, testability would continue to be framework-driven as it is now.

A similar utility function set is lib/Dialect/Tosa/Utils/QuantUtils.cpp. It does have a simple test pass, which converts between two quantized domains for the same TOSA IR. However, for this RFC, it’s unclear how to implement a test pass in-core, since there’s no dialect to start from whose TOSA legalization construction could invoke these utility functions.

cc: @eric-k @_sean_silva


Thanks! That looks like a great addition 🙂

The main challenge will indeed be testability; I’d really like code in the repo to have test coverage (independently of TensorFlow, that is).
What can we do to this end?
Maybe a tosa_frontend dialect that would define ops matching the APIs exposed there, and then a conversion pass from this test dialect to TOSA could be used to validate all this?
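Concretely, such a test could look roughly like the sketch below. The `tosa_frontend.softmax` op and the `-convert-tosa-frontend-to-tosa` pass are hypothetical names for a dialect that does not exist yet; the CHECK lines just illustrate the FileCheck style already used for dialect conversions in-tree.

```mlir
// RUN: mlir-opt %s -convert-tosa-frontend-to-tosa | FileCheck %s
// (hypothetical op and pass names)
func.func @softmax(%arg0: tensor<2x8xf32>) -> tensor<2x8xf32> {
  // CHECK: tosa.exp
  // CHECK: tosa.reduce_sum
  %0 = tosa_frontend.softmax %arg0 : (tensor<2x8xf32>) -> tensor<2x8xf32>
  return %0 : tensor<2x8xf32>
}
```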

Yeah, a producer dialect is exactly what I thought might be an option - I just mentally called it tosa_sockpuppet.

But… this adds a collection of maintenance formalities here - now there’s one more dialect to maintain, tests for that to write, and the need to keep that framework dialect contemporaneous with evolving real frontend needs. Then there’s the question of saying ‘this isn’t a real dialect, it’s just pretending to be a frontend’ or something like that.

This really dovetails into the e2e conversation, including the one on the repository code reorg - is this really the only / best way to do this? There are generally going to be frontend- and backend-side compiler stacks that are independent out-of-tree constructs, simply because backends tend to want to consume multiple frontend paths.

Right: so we should expose APIs for such out-of-tree frontends. The way I see it, we can expose either:

  • C++ APIs
  • IR construct APIs (builder APIs).

MLIR has mostly standardized IR construction on the second option, using dialects, and all the tooling and conventions around it make this a non-problem from a testing point of view.
I’m not against exposing C++ APIs if that is best, but the fact that there are out-of-tree frontends using these APIs does not seem like a reason not to have tests for them on our side, right? I would think that most libraries or projects exposing public APIs have some sort of tests for them.

I’m strongly in support of having a testability setup in place. The real issue here is they are being tested - just not in core, and the core ‘cannot see it’. Both Tensorflow (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/tosa/tests/tf-to-tosa-pipeline.mlir, https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/tosa/tests/tfl-to-tosa-pipeline.mlir) and Torch (https://github.com/llvm/torch-mlir/blob/main/test/Conversion/TorchToTosa/basic.mlir) have legalization tests that exercise these C++ APIs.

I’m just wondering if this test result or mechanism can be somehow ‘portable’ to support constructs like these involving testability of a C++ API used across two out-of-tree setups, or whether MLIR would want to retain the autonomy to independently test such APIs within itself.

If the latter is strongly preferred, then yes, I can endeavor to build such a ‘framework dialect’ to drive testing, though it should probably be located in a manner that it doesn’t get confused with an actual framework for people to start writing networks with…

This is a good idea. It’s too bad to have this duplicated.

+1 on seeing these as “builders” from a test perspective. If it’s not a huge problem, actually making them literally be builders (let builders = ...) on the ops would be nice. I think from a testability perspective it’s the same, whether the functions are literally let builders = ... or just a loose header file. (do we usually insist that all builders are tested? somehow I get the feeling that we don’t, but I might be wrong)

We don’t test auto-generated builders, and we are likely not strict about trivially forwarding builders (like those supporting defaulted operands, or operands inferred from others). I was under the impression that there was a non-trivial amount of logic involved in what is proposed here, though?

Right, we already define the test dialect in the test directory in such a way that isn’t linked in libMLIR.so or part of an MLIR installation. The same could apply here I think.

I thought about this, and it’s an interesting lens to view things through. The main problem here is - and I may be mistaken - that builders build the op in the target dialect.

These utility functions legalize the source op, i.e. there’s a convertSoftmaxOp. There’s no softmax in TOSA, just ways to express it. The function in question emits 40+ TOSA ops for quantized bit-accurate softmax expression, and the corresponding normalized exponential function in floating point domain, depending on input type. See: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/tosa/transforms/legalize_common.cc#L1314

So it’s not really a builder for the TOSA side - it’s a helper to build on the source side. It’s just that several source frameworks have largely the same constructs for various ops like these, and there’s a necessity to also express common case behavior during this TOSA construction (like dynamic shape expression) and that’s what these functions do.
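As a rough illustration of the floating-point case, softmax has to be expressed as a chain of existing TOSA ops. This is a heavily simplified sketch of the normalized-exponential form; the actual convertSoftmaxOp additionally handles quantized types, dynamic shapes, and bit-accuracy details, and the exact attribute spellings here are from memory.

```mlir
// Simplified sketch: softmax(x) = exp(x - max(x)) / sum(exp(x - max(x)))
%max = "tosa.reduce_max"(%x) {axis = 1 : i64} : (tensor<2x8xf32>) -> tensor<2x1xf32>
%sub = "tosa.sub"(%x, %max) : (tensor<2x8xf32>, tensor<2x1xf32>) -> tensor<2x8xf32>
%exp = "tosa.exp"(%sub) : (tensor<2x8xf32>) -> tensor<2x8xf32>
%sum = "tosa.reduce_sum"(%exp) {axis = 1 : i64} : (tensor<2x8xf32>) -> tensor<2x1xf32>
%rcp = "tosa.reciprocal"(%sum) : (tensor<2x1xf32>) -> tensor<2x1xf32>
%res = "tosa.mul"(%exp, %rcp) {shift = 0 : i32} : (tensor<2x8xf32>, tensor<2x1xf32>) -> tensor<2x8xf32>
```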

Ah, ok. My opinion that has developed over time is that it’s basically not useful to write tests that are simply a translation from b.create<FooOp> in a .cpp to foo.op ... in a .mlir file. These sorts of functions should be tested via correctness tests end-to-end. I think we have enough scaffolding in the integration tests directory to throw something together to verify aspects of the numerics here (at least a cursory check). You would write a test pass that calls these functions to generate some basic test IR (to avoid introducing a dummy dialect just to be able to call these functions), and then pipe it through opt/cpu-runner to get some numbers printed out somehow.
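The shape of such an integration test might be something like the RUN line below; the `-test-tosa-construct` pass name is hypothetical, and the lowering pipeline between the two tools is elided.

```mlir
// Hypothetical RUN line: a test pass emits IR via the construction
// functions, then the result is lowered and executed for a numeric check.
// RUN: mlir-opt %s -test-tosa-construct ... | mlir-cpu-runner -e main \
// RUN:   -entry-point-result=void ... | FileCheck %s
```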

I know that’s pretty annoying, but in my experience the bug-catching value of “this expanded to that” kind of tests is very low, whereas for targeted cases like this, the correctness testing, even cursory, catches most of the gross errors.

Yes that’s essentially the idea around the reference model integration conversation that’s been going on in parallel recently: TOSA reference model from MLIR using EmitC - #12 by mehdi_amini . If I were to integrate the existing model into the core repo it would be some form of cpu-runner entity.

The existing test construct is to have these legalization tests on the source framework side (TF/TFLite and Torch separately). It’s not a pass as such, but a Python script that simply constructs MLIR form of source dialect ops, transforms it to TOSA form and then compares the two for bit accuracy.

Separately, they have the standard ‘check-mlir’-like construction tests listed earlier. These test the legalization constructs, and they invoke these functions too.

Yeah, someplace within test/ might be a good place to put such a tosa-test-frontend dialect. I’ll experiment with this and see if it looks reasonable for a proposal. The fact that these functions are intended to remain stable and framework-invariant works in favor of this option.


@sjarus @eric-k a TF test tfl-to-tosa-pipeline.mlir needs to be updated after kDynamicSize changed in rG9729b6930b41. I was wondering if you could pick that up, since you planned to upstream this code/test to MLIR, if I understand correctly.

This appears to be a replacement of -1 with std::numeric_limits<int64_t>::min(), if I’m not mistaken? In that case, I think that can be done in place in the test .mlir, and this task would not materially impact it.

It’s not that simple. There is a lot of C++ code that hard-coded -1. It’s not just about fixing the .mlir files. At the moment I disabled TFL->TOSA test in TF.

Ah I think I understand the implications better here - it sounds like the legalization pass needs to change and the construction library may be involved. Thanks for bringing this up!


Yes, we can start looking at it. It may be a bit delayed, as most of us are out next week.