Quite the opposite, this is a very good question!
In a previous post, I showed that (lacking more advanced bufferization optimizations) we need something like {linalg.inplaceable = true} on a dense output in cases with sparse inputs, just to keep an updating kernel's complexity O(nnz) rather than O(n).

For sparse outputs (something still under development, but coming real soon), we will either have to go the route of only accepting sparse tensors built "from scratch", or assign even broader semantics to the inplaceable annotation. I am still actively developing the sparse output implementation, and I would love to hear if people have strong opinions on which direction to take here.
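To make the dense-output case concrete, here is a rough sketch of the kind of updating kernel I have in mind: x(i) += a(i) with a sparse input and a dense output carrying the inplaceable annotation. This is only an illustration (the kernel, tensor sizes, and attribute syntax are my own example, and the exact encoding/op spellings vary across MLIR versions), not a verbatim excerpt from the tests:

```mlir
// Hypothetical example: sparse vector encoding (syntax varies by MLIR version).
#SV = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>

#trait = {
  indexing_maps = [
    affine_map<(i) -> (i)>,  // a (sparse input)
    affine_map<(i) -> (i)>   // x (dense output)
  ],
  iterator_types = ["parallel"]
}

// The annotation on %argx tells bufferization it may update the dense
// buffer in place. The generated loop then only visits the nnz stored
// entries of %arga, i.e. O(nnz), instead of materializing a fresh copy
// of all n entries of %argx, i.e. O(n).
func @update(%arga: tensor<32xf64, #SV>,
             %argx: tensor<32xf64> {linalg.inplaceable = true})
             -> tensor<32xf64> {
  %0 = linalg.generic #trait
     ins(%arga: tensor<32xf64, #SV>)
    outs(%argx: tensor<32xf64>) {
      ^bb(%a: f64, %x: f64):
        %sum = arith.addf %x, %a : f64
        linalg.yield %sum : f64
  } -> tensor<32xf64>
  return %0 : tensor<32xf64>
}
```

Without the annotation, bufferization has to assume %argx may be read elsewhere after the call and allocate-and-copy the full dense tensor before updating it, which is exactly the O(n) cost the annotation avoids.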