`linalg.generic` doesn’t appear to allow scalar output. Attempting a simple vector dot product gives the error: `'linalg.generic' op expected the number of results (1) to be equal to the number of output tensors (0)`.
I found this code comment in LinalgInterfaces.cpp, which states:
// Expect at least one output operand.
// This means an op that constructs a tensor out of indices cannot be a
// LinalgOp at the moment. For now this will have to be a special op until we
// have output shape operands that are not tensors.
I know `linalg.dot` exists, but that doesn’t cover my real use case, which is matrix multiplication over non-standard semirings (e.g., add the intersecting values, then reduce by taking the minimum: the min-plus semiring). Ultimately, I’m working towards implementing GraphBLAS using the sparse_tensor dialect’s lowering of `linalg.generic`.
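To make the use case concrete, here is a sketch of what a min-plus matrix multiply would look like as a `linalg.generic` (exact syntax varies across MLIR versions; this assumes the `arith` dialect, where the min op has been spelled `arith.minf` and later `arith.minimumf`):

```mlir
#trait_minplus = {
  indexing_maps = [
    affine_map<(i, j, k) -> (i, k)>,  // A
    affine_map<(i, j, k) -> (k, j)>,  // B
    affine_map<(i, j, k) -> (i, j)>   // C (accumulator)
  ],
  iterator_types = ["parallel", "parallel", "reduction"]
}

// Min-plus semiring: "multiply" is addition, "reduce" is minimum.
// %C must be initialized to +infinity, the identity of min.
%0 = linalg.generic #trait_minplus
    ins(%A, %B : tensor<?x?xf64>, tensor<?x?xf64>)
    outs(%C : tensor<?x?xf64>) {
  ^bb0(%a: f64, %b: f64, %c: f64):
    %sum = arith.addf %a, %b : f64
    %min = arith.minf %sum, %c : f64
    linalg.yield %min : f64
} -> tensor<?x?xf64>
```

Swapping in other combine/reduce pairs in the region body gives the other GraphBLAS semirings without touching the indexing maps.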
Are there technical challenges that make a full reduction difficult for `linalg.generic`, or does it simply need someone willing to add the necessary code and tests? `linalg.generic` is truly amazing, especially with the sparse support added by @aartbik, so I’m hoping this restriction isn’t permanent.
This is perfect. I forgot that zero-dimensional (rank-0) tensors exist. I kept trying with `outs(%argx : f32)`.
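For anyone else who hits this, a sketch of the dot product with a rank-0 tensor output (syntax may differ slightly between MLIR versions; this assumes the `arith` dialect):

```mlir
#trait_dot = {
  indexing_maps = [
    affine_map<(i) -> (i)>,  // a
    affine_map<(i) -> (i)>,  // b
    affine_map<(i) -> ()>    // x: rank-0 result
  ],
  iterator_types = ["reduction"]
}

// %argx is a tensor<f32> (zero-dimensional), not a bare f32,
// so the op has an output tensor to carry the reduction.
%0 = linalg.generic #trait_dot
    ins(%a, %b : tensor<?xf32>, tensor<?xf32>)
    outs(%argx : tensor<f32>) {
  ^bb0(%x: f32, %y: f32, %acc: f32):
    %m = arith.mulf %x, %y : f32
    %s = arith.addf %acc, %m : f32
    linalg.yield %s : f32
} -> tensor<f32>
```

The scalar can then be read back out with `tensor.extract` on the rank-0 result.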
Having the sparse_tensor dialect “scalarize” the result is exactly the final output I want. The sparse_tensor lowering of `linalg.generic` continues to amaze me! I know it has been a huge effort getting to this point. Thank you for all your hard work.