Sparse tensor lowering to GPU

Has anyone tried to target GPUs with sparse tensors?
Looks like @aartbik has been working on this in stealth mode :partying_face:
Ha! It was not my intention to operate in stealth mode. I had merely overlooked this question.
But yes, expect a lot more progress on this front really soon!
Really looking forward to this!

I’ve tried multiplying extremely sparse tensors with both PyTorch and TensorFlow on an RTX 4090, and it was much slower than the dense equivalent.

So is a GPU basically no help for sparse matrix multiplication?

That seems a rather quick conclusion :wink: even though, indeed, accelerating general sparse computations on GPUs can be very challenging. Which sparse frameworks or add-ons did you use for your PyTorch and TensorFlow experiments?
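Part of the challenge is structural: a sparse-times-dense product has data-dependent, irregular memory access, unlike the regular tiling that makes dense matmul so GPU-friendly. A minimal pure-Python sketch of a sparse matrix-vector product in the standard CSR layout illustrates this (the function and variable names here are just for illustration, not from any particular framework):

```python
def csr_matvec(values, col_indices, row_ptr, x):
    """Multiply a CSR-format sparse matrix by a dense vector x.

    values      -- nonzero entries, stored row by row
    col_indices -- column index of each nonzero
    row_ptr     -- row i's nonzeros live in values[row_ptr[i]:row_ptr[i+1]]
    """
    y = []
    for i in range(len(row_ptr) - 1):
        acc = 0.0
        # The inner trip count varies per row, and the gather
        # x[col_indices[k]] is data-dependent. This irregularity
        # (load imbalance, uncoalesced reads) is what makes sparse
        # kernels hard to map efficiently onto GPU threads.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_indices[k]]
        y.append(acc)
    return y

# 3x3 matrix [[2, 0, 0], [0, 0, 3], [0, 1, 0]] in CSR form:
values = [2.0, 3.0, 1.0]
col_indices = [0, 2, 1]
row_ptr = [0, 1, 2, 3]
print(csr_matvec(values, col_indices, row_ptr, [1.0, 1.0, 1.0]))  # prints [2.0, 3.0, 1.0]
```

At very high sparsity the per-row work can shrink below the point where a GPU's parallelism pays for the indirection, which is one plausible reason a dense kernel can win on hardware like an RTX 4090.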