With the very elaborate, but also very fun, revision D102095, my dream of migrating to a first-class sparse tensor type has become a reality!

Some major differences are described below.

(1) The sparse annotations in Linalg ops have been replaced with proper sparse tensor types. So, for example, rather than using an annotation on the operation trait:

```
#trait_sum_reduce = {
  indexing_maps = [
    affine_map<(i,j) -> (i,j)>, // A
    affine_map<(i,j) -> ()>     // x (out)
  ],
  sparse = [         // <= THIS PART
    [ "S", "S" ],    // A  <= IS REMOVED
    [ ]              // x
  ],
  iterator_types = ["reduction", "reduction"],
  doc = "x += A(i,j)"
}
```

This information is now carried by the sparse tensor type itself:

```
#SparseMatrix = #sparse_tensor.encoding<{
  dimLevelType = [ "compressed", "compressed" ]
}>
```

(2) The secondary storage choices for pointer and index width (viz. `--sparsification="ptr-type=2 ind-type=2"`), which applied to **all** sparse tensors, have been removed in favor of a **per-tensor** specification of these widths:

```
#SparseMatrix = #sparse_tensor.encoding<{
  dimLevelType = [ "compressed", "compressed" ],
  pointerBitWidth = 32,
  indexBitWidth = 32
}>
```

(3) The glue and clutter needed to materialize the sparse tensor storage, passed as an opaque pointer, into proper tensors is completely gone! So no more:

```
func @kernel_sum_reduce(%argA: !SparseTensor ... ) {
  %arga = sparse_tensor.fromPtr %argA : !SparseTensor to tensor<?x?xf64>
  %0 = linalg.generic #trait_sum_reduce
      ins(%arga: tensor<?x?xf64>)
  ...
```

But simply:

```
func @kernel_sum_reduce(%arga: tensor<?x?xf64, #SparseMatrix> ...) {
  %0 = linalg.generic #trait_sum_reduce
      ins(%arga: tensor<?x?xf64, #SparseMatrix>)
  ...
```

Subsequent bufferization takes care of replacing the types with whatever underlying implementation is selected by the compiler.

Also, setting up a sparse tensor in an integration test that connects with the sparse support library sees the biggest improvement. No longer:

```
%annotations = memref.alloc(%c2) : memref<?xi1>
%sparse = constant true
memref.store %sparse, %annotations[%c0] : memref<?xi1>
memref.store %sparse, %annotations[%c1] : memref<?xi1>
%i64 = constant 1 : index
%f64 = constant 1 : index
%a = call @newSparseTensor(%fileName, %annotations, %i64, %i64, %f64)
    : (!Filename, memref<?xi1>, index, index, index) -> (!SparseTensor)
```

But just the following (sic!):

```
%a = sparse_tensor.new %fileName : !Filename to tensor<?x?xf64, #SparseMatrix>
```

(4) More rigorous verification of type consistency for all ops, including the generated sparse primitives that connect the generated sparse code with the support library (note that the latter is there merely for convenience; in the long run, even the library could be replaced with codegen).
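For illustration, here is a sketch of the kind of consistency the verifier can now enforce (op name and syntax are assumptions for illustration, not the definitive form of the generated primitives):

```
// With pointerBitWidth = 32 in #SparseMatrix, extracting the pointer
// buffer of a dimension must yield a memref of i32 elements; a
// mismatched result type such as memref<?xi64> is rejected.
%pointers = sparse_tensor.pointers %a, %c0
    : tensor<?x?xf64, #SparseMatrix> to memref<?xi32>
```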

**Next**: one last revision that removes all obsoleted sparse code from the Linalg dialect.