Centralized place for "smart, fast, correct-enough parallel computation stuff"

Hey folks, I feel like we are “missing a project” in the MLIR ecosystem.

Today I was looking at [RNGs result in highly correlated random tensors · Issue #1608 · llvm/torch-mlir · GitHub](https://github.com/llvm/torch-mlir/issues/1608), which results from frontend people (us, on Torch-MLIR :slight_smile: ) just grabbing an off-the-shelf RNG algorithm and using it without really thinking through the statistical implications or the performance, honestly. IIRC this code was originally copied from the MHLO->Linalg lowering of a similar op. (I’ll mention that at least Torch-MLIR uses an actual varying seed, whereas MHLO->Linalg uses a fixed seed: obligatory xkcd.)
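
To make the failure mode concrete, here is a rough Python sketch (this is not the actual Torch-MLIR or MHLO lowering; the function name and constants below are made up purely for illustration) of what can go wrong when each element’s value is derived from the seed plus its linear index with too little mixing: bumping the seed by one just produces a shifted copy of the previous tensor, whereas a counter-based generator like Philox keyed per call does not do that.

```python
import numpy as np

def naive_element_rng(seed: int, size: int) -> np.ndarray:
    """Hypothetical per-element generator: a single LCG-style multiply-add
    step keyed by (seed + linear index). Purely illustrative."""
    idx = np.arange(size, dtype=np.uint64)
    state = np.uint64(seed) + idx                       # weak key derivation
    state = state * np.uint64(6364136223846793005) + np.uint64(1442695040888963407)
    return (state >> np.uint64(33)).astype(np.float64) / 2.0**31  # map to [0, 1)

a = naive_element_rng(0, 1_000_000)
b = naive_element_rng(1, 1_000_000)   # "fresh" tensor drawn with the next seed

# The "new" tensor is just the old one shifted by a single element:
print(np.array_equal(b[:-1], a[1:]))          # True: they share all but one value

# A counter-based bit generator keyed per call (e.g. Philox) behaves properly:
pa = np.random.Generator(np.random.Philox(key=0)).random(1_000_000)
pb = np.random.Generator(np.random.Philox(key=1)).random(1_000_000)
print(abs(np.corrcoef(pa, pb)[0, 1]) < 0.01)  # True: streams effectively independent
```

These are exactly the kinds of design choices (key derivation, stream splitting, state layout) that a shared project could benchmark and validate once, instead of every frontend re-deriving them.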

It would be great if we had a repo where we could all work together to build “batteries-included”, high-performance, and numerically/mathematically thought-through-enough abstractions. Besides all the usual matmul/reduction work in the ecosystem, we could probably put sparse there too, along with various benchmarked, validated strategies for RNGs, FFTs, embeddings, etc.

I mean, we can fix this locally in Torch-MLIR, but it really doesn’t feel like “our job” as a frontend project to be digging into the deep mathematical/statistical properties of RNGs and committing that work into our repo (when it should be shared across the ecosystem). We have a number of known-suboptimal lowerings in a similar vein that really need to be owned outside the frontend as well.

Can we create a “smart, fast, correct-enough parallel computation stuff” project somehow?