Hey all - I’ve had various discussions with folks using the MLIR C-API to do interesting work in language-specific downstreams (e.g. I know of Haskell, Rust, and Julia integrations of various types). From talking to folks, I know that there are some common threads of pain points, but given the fragmentation of communities/languages/etc, I feel like we’re missing some critical-mass potential. As we look to 2022, I think it would be great if we got more visibility on these things as part of the core project discussions – I personally would find it really valuable to understand better the uses, goals, and approaches being explored.
Would anyone involved in these efforts be interested in grabbing some ODM slots in 2022 and presenting their work/issues/hopes/dreams? I think it might help us as a community understand and flesh out more support for these areas.
Side note: I found the schedule doc really hard to find (I have it on my calendar but was trying to locate it via search). The top Google result for “mlir open design meeting” is the MLIR - Talks page, which seems like the right place for it. However, the “Open Design Meetings” link on that page points to a TensorFlow community page which, to my eye, looks woefully out of date and mis-targeted. I did find the agenda doc there, but I think we should link directly to it (or use a more modern scheduling approach) from the Talks page and retire the TensorFlow-specific site. Any objections to making that change?
It’s on the home page: https://mlir.llvm.org
I wouldn’t mind discussing some of the questions I’ve been curious about recently – however, I find the ODM slightly intimidating, as I haven’t contributed to the core architecture or interacted deeply with the community for any significant length of time.
One point I can provide some content on is the usage of MLIR in Julia – I think there are a number of people who are “MLIR curious” but are intimidated by C++, so there is impetus to use the C bindings and other tools for integrating MLIR-based lowering pipelines (e.g. brutus). I think (unfortunately) that the skill requirements are still very high regardless of which interface “MLIR curious” users choose to interact with the system.
Just off the top of my head: there are some folks curious about an “array IR” project (based on some initial work on loop vectorization); there’s a quantum computer scientist who wants to develop an MLIR-based dialect for programming quantum devices and integrate it into Julia as a compiler plugin; and there are other folks generally interested in the potential benefits of inserting an MLIR phase into Julia’s native execution pipeline. Unfortunately there hasn’t been much funding for these efforts (through whatever means) – but the interest is still there, I think.
Recently (in the past month or so), I’ve been developing a small project with the intent of understanding modern usage of abstract interpretation in type systems. I started out with a homemade IR which I expected to use to target MLIR (so a higher-level dialect which I could use for my interpreter implementation, and then I’d write an MLIR dialect to lower to). I did this work in Rust – and wrote a -sys library for the C bindings. More generally, I think it would be interesting to consider what partial evaluation / abstract interpretation look like when embedded into MLIR concepts. That’s partially the intent of my project (which, as I said, is super nascent!) – I would enjoy discussing this, but I would like it to mature a bit more (if that’s okay) before I commit to presenting on something which still has some risk involved.
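For anyone curious what the -sys layer looks like: it’s mostly mechanical FFI boilerplate over the C-API headers. Here’s a minimal sketch (not my actual crate) – the function and type names below come from MLIR’s `mlir-c/IR.h`, where every IR object is a small C struct wrapping one opaque pointer, but the Rust declarations here are simplified for illustration:

```rust
// Hand-written FFI sketch over a couple of MLIR C-API entry points.
// In the C API, handle types like MlirContext are single-pointer
// wrapper structs passed by value, so they map to #[repr(C)] structs.
use std::ffi::c_void;

#[repr(C)]
#[derive(Clone, Copy)]
pub struct MlirContext {
    pub ptr: *mut c_void, // opaque handle owned by the MLIR runtime
}

#[allow(dead_code)]
extern "C" {
    // Declarations mirror the C prototypes in mlir-c/IR.h;
    // linking against the MLIR C-API library is required to call them.
    pub fn mlirContextCreate() -> MlirContext;
    pub fn mlirContextDestroy(context: MlirContext);
}

fn main() {
    // Because the wrapper is a #[repr(C)] single-pointer struct, it can
    // cross the FFI boundary by value exactly as the C API expects.
    assert_eq!(
        std::mem::size_of::<MlirContext>(),
        std::mem::size_of::<*mut c_void>()
    );
    println!("MlirContext is pointer-sized: ok");
}
```

The nice part is that this shape is uniform across the whole C-API surface, so most of a -sys crate can be generated (e.g. with bindgen) rather than written by hand.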
+1 I think it would be great to use dedicated ODM slots for topics like this - I find it useful to be able to talk through non-obvious things as a group.
But you probably have among the most experience using it from Julia, so it seems like a great fit for this discussion.
The home page - how quaint. The one place I don’t think I ever looked. Doesn’t everyone just utter half-formed wishes into Google and expect it to tell us what we want to know?