TCP discussion at the MLIR Open meeting on 4/9

Hi all,

We don’t have a presentation scheduled for the open meeting this Thursday. Instead of cancelling, I thought it would be a good idea to use the time to discuss TCP: what we want to achieve and some of the first steps!

@_sean_silva has been experimenting with some ideas and agreed to share some early thoughts that could drive some interesting discussions. I’ll let them give more details! Maybe we can seed the discussion here.

Thanks Mehdi.

As folks have seen from recent threads in the TCP-WG category, I’ve recently become somewhat obsessed with bridging the abstraction gap between what tensor-based frontends provide (especially “numpy”-like frontends) and what we need to connect to at the lower levels of the compilation stack. I feel this is a natural setting for working out how to design TCP, because we have to do that lowering anyway, and doing so emphasizes how we “peel off” parts of the program to get to the optimizable chunks.

I’ll put together some slides tomorrow and post them here as a preview, and I welcome further discussion in this thread of course. I think we’re actually pretty close to getting something workable, and the end-to-end (Python → execution) numpy prototype that @stellaraccident and I are building (see Numpy/scipy op set - #31 by mehdi_amini) should be a good test of the overall viability of the approach.

I’m especially curious: what are folks’ takeaways from the recent TCP-WG posts? Is a picture starting to crystallize for anybody else?

Here are some slides I put together.

Preliminary title: “TCP” Is More Than TCP

Thanks for the slides

In the last one I see:

Goal is to have an end2end (Python → Execution) flow built with mainly upstream infra, to exercise the “TCP” design.

Do you think you will need to build on something from initiatives like:

Thanks for the links; those look very cool. They are good prior art, but I suspect we will need something slightly different, such as a way to declare that a particular dimension is dynamic but does not participate in size-1 broadcasting.

In the nptyping terminology, in addition to something like:

NDArray[(3, 3, typing.Any), float]

we would have something like:

NDArray[(3, 3, typing.NonBroadcasting), float]
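
To make this concrete, here is a minimal sketch, with the caveat that NonBroadcasting is purely hypothetical (it exists in neither typing nor nptyping); it is defined below only to illustrate the kind of promise such an annotation could carry to the compiler:

import typing

class NonBroadcasting:
    """Hypothetical marker: this dimension is dynamic, but is promised
    never to have size 1, so it never participates in broadcasting."""

# In the notation above:
#   NDArray[(3, 3, typing.Any), float]       -> last dim dynamic, may broadcast
#   NDArray[(3, 3, NonBroadcasting), float]  -> last dim dynamic, never broadcasts

def needs_broadcast_check(dims):
    """Compiler-side sketch: a runtime size-1 broadcast check is only needed
    for dimensions annotated as fully dynamic (typing.Any)."""
    return any(d is typing.Any for d in dims)

needs_broadcast_check((3, 3, typing.Any))       # True
needs_broadcast_check((3, 3, NonBroadcasting))  # False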

I think the exact set of frontend annotations that we need will be discovered by backpropagating from “hard” problems inside the compiler back to “what would solve this, if promised at the frontend level?”. I don’t claim to have the answer yet; we need to build it first.

Do you think it is solvable within the scope of the current PEPs?

I believe so. A lot of it is expressible with the tools introduced in PEP 484.
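
As a rough illustration (not the real nptyping API): the TypeVar/Generic machinery from PEP 484, together with Literal from the later PEP 586, is already enough to spell shape- and dtype-parameterized annotations of the kind discussed above. The NDArray class here is an illustrative stand-in only:

from typing import Any, Generic, Literal, Tuple, TypeVar

Shape = TypeVar("Shape")
DType = TypeVar("DType")

class NDArray(Generic[Shape, DType]):
    """Illustrative stand-in for a shape/dtype-parameterized array type."""

# Analogous to NDArray[(3, 3, typing.Any), float] above: 3x3 with a fully
# dynamic trailing dimension.
A = NDArray[Tuple[Literal[3], Literal[3], Any], float]

def relu(x: A) -> A:
    ...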

If you are interested, the original thread in numpy is at Type hinting / annotation (PEP 484) for ndarray, dtype, and ufunc · Issue #7370 · numpy/numpy · GitHub

@_sean_silva See also RFC: TensorFlow Canonical Type System by mdanatg · Pull Request #208 · tensorflow/community · GitHub

Slides and recording are online for those who missed it.