I posted a list of MLIR sparse compiler “starter” projects (ranging from simple to hard) on Bugzilla under the SparseTensor dialect component. This list will hopefully be particularly useful for Outreachy internship applicants who want to contribute to this project. But of course, everyone is invited to help out.
I plan to keep this list current as I think of new starter projects. Please talk to me first, though, if you want to work on one, so I can coordinate better between interested contributors.
Hey! I’m very new to the LLVM/MLIR codebase, but with a little guidance I’d love to contribute to the project.
Please browse through the list to see if you find something particularly interesting and then drop me an email so we can coordinate!
And just to be clear, drop me an email simply to coordinate ownership of the bugs and avoid conflicting and/or duplicate assignments. After that, we’d love to see any design brainstorming here on Discourse, so that everyone is aware of the progress and can chime in on the discussion!
I have posted my thoughts on implementing benchmarks here: 52308 – Implement sparse kernel benchmarks (moderate level, independent, starter). Let me know what you think.
I took the liberty of copying and pasting your comment below. I think inline comments are a bit more inviting in this forum than links.
LLVM uses the Google Benchmark library (https://github.com/google/benchmark) for microbenchmarks. The example use I looked at in particular is libc (https://github.com/llvm/llvm-project/tree/main/libc/benchmarks). We could start the same way for MLIR by adding a benchmarks directory to it and setting up Google Benchmark for it similar to this setup (https://github.com/llvm/llvm-project/blob/main/libc/benchmarks/CMakeLists.txt).
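To make the idea concrete, the setup could look roughly like the libc one. This is only a sketch: the directory layout, target names, and source file name below are placeholders I made up, not the actual MLIR layout, and the real integration would likely reuse LLVM’s existing CMake machinery rather than FetchContent.

```cmake
# Hypothetical mlir/benchmarks/CMakeLists.txt, loosely modeled on
# libc/benchmarks. All names here are illustrative only.
include(FetchContent)
FetchContent_Declare(
  google_benchmark
  GIT_REPOSITORY https://github.com/google/benchmark.git
  GIT_TAG main
)
# Don't build Google Benchmark's own unit tests.
set(BENCHMARK_ENABLE_TESTING OFF CACHE BOOL "" FORCE)
FetchContent_MakeAvailable(google_benchmark)

# One executable per benchmark suite; SparseKernelBenchmark.cpp is a
# placeholder source file name.
add_executable(MLIRSparseBenchmarks SparseKernelBenchmark.cpp)
target_link_libraries(MLIRSparseBenchmarks PRIVATE benchmark::benchmark)
```

The `benchmark::benchmark` target is the one Google Benchmark itself exports, so linking against it pulls in the headers and the library.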
Additionally, there is an external repo (GitHub - llvm/llvm-test-suite) for LLVM test suites, which contains microbenchmarks, some example applications to test/benchmark, and external suites like SPEC (which are not included in the test-suite repo). I don’t think this model of having a separate repo for tests is relevant for the purposes of this ticket, but I wanted to mention it.
Later, we would want to integrate the MLIR benchmarks into Buildbot using these instructions (How To Add Your Build Configuration To LLVM Buildbot Infrastructure — LLVM 16.0.0git documentation), which you mentioned in the third point of your numbered list.
Please let me know your thoughts or if you want more investigation before getting started here.
For sure, I should have done this in the first place.
In the Bugzilla link, you outlined three steps:
The entry requests adding such benchmarks to MLIR, which requires
(1) investigating the typical way in which LLVM at large integrates benchmarks
(2) finding interesting sparse kernels to implement and measure
(3) integrating such tests into a continuous build (or at least a frequently run system)
Do you think we should move on to step 2, assuming google/benchmark is the right framework for this?
Sounds good to me, although of course we need the usual bike-shedding on where to put benchmarks in the MLIR source tree. I think the best course of action right now is to make an initial revision that starts the benchmarks, and perhaps post a benchmarking RFC here on Discourse (as a new thread) that links to that proposed revision. Adding benchmarks is a pretty big step, and I am sure people will want to chime in (but perhaps they are less inclined to read this “sparse compiler starter projects” thread). Sorry for the extra steps, but we need to tread carefully. Once the initial stuff is out of the way, we can go full speed with the fun stuff!
Sounds great! I will create a revision in a few days hopefully.
The link in this posting did not age well, since we migrated from Bugzilla to GitHub shortly thereafter.
The new link is GitHub issues labeled with mlir:sparse.