Linalg.fill and bufferization

Hello, everyone!

I am trying to use bufferization passes on my MLIR, and during my experiments I ran into a segfault after the `linalg-bufferize` pass. Investigating the root cause showed me that the culprit is a `linalg.fill` op in my IR. When I remove it, everything seems to work fine.

I composed a small test from the Linalg test suite. This is the input for `mlir-opt`:

```mlir
func @depthwise_conv_2d_input_nhwc_filter_hwcf_tensor(%input: tensor<2x4x5x2xf32>, %filter: tensor<2x2x2x3xf32>) -> tensor<2x3x4x2x3xf32> {
  %zero = constant 0.000000e+00 : f32
  %init = linalg.init_tensor [2, 3, 4, 2, 3] : tensor<2x3x4x2x3xf32>
  %fill = linalg.fill(%init, %zero) : tensor<2x3x4x2x3xf32>, f32 -> tensor<2x3x4x2x3xf32>

  %0 = linalg.depthwise_conv_2d_input_nhwc_filter_hwcf { strides = dense<1> : tensor<2xi64> } ins(%input, %filter : tensor<2x4x5x2xf32>, tensor<2x2x2x3xf32>) outs(%fill : tensor<2x3x4x2x3xf32>) -> tensor<2x3x4x2x3xf32>
  return %0 : tensor<2x3x4x2x3xf32>
}
```

When I run `mlir-opt --linalg-bufferize`, I get this assertion failure:

```
bool mlir::Attribute::isa() const [U = mlir::DenseIntElementsAttr]: Assertion `impl && "isa<> used on a null attribute."' failed.
```

My further investigation showed that this pass also processes `linalg.fill`, and while doing so it invokes the function

```cpp
static LogicalResult allocateBuffersForResults(Location loc, LinalgOp linalgOp,
                                               linalg::GenericOpAdaptor &adaptor,
                                               SmallVectorImpl<Value> &resultBuffers,
                                               OpBuilder &b)
```

This function tries to get the outputs of the `linalg.fill` op through `linalg::GenericOpAdaptor`, which relies on the `operand_segment_sizes` attribute. That attribute doesn't exist on `linalg.fill`, so under these circumstances I end up dereferencing a null pointer.
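To illustrate what a successful bufferization of the `%fill` above should roughly produce, here is a hand-written sketch (my assumption of the expected output, using the `linalg.fill(output, value)` syntax of that time and a `memref.alloc` for the result buffer; this is not the actual output of the pass):

```mlir
// Hypothetical bufferized form of %fill: allocate the result buffer
// and fill it in place on a memref instead of producing a tensor.
%zero = constant 0.000000e+00 : f32
%buf = memref.alloc() : memref<2x3x4x2x3xf32>
linalg.fill(%buf, %zero) : memref<2x3x4x2x3xf32>, f32
```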

So, my question is: what should I do with the `linalg.fill` op before applying this pass?

I have a fix for that in ⚙ D98671 [mlir] Add linalg.fill bufferization conversion, but I’m not quite sure it’s the right one @nicolasvasilache

Wow, great! I took a quick look at those changes. It seems this patch is exactly what I was expecting.

Thanks! Looking forward to this patch being merged!