Convolution Depthwise NCHW in Linalg dialect

Hello all!
I want to ask a question about the depthwise convolution op in the Linalg dialect. The Linalg dialect contains a Conv2D op for both I/O layouts: NCHW (linalg::Conv2DNchwFchwOp) and NHWC (linalg::Conv2DNhwcHwcfOp). But DepthwiseConv2D is present for the NHWC layout only (linalg::DepthwiseConv2DNhwcXXXXOp).
Why is there no DepthwiseConv2D op for the NCHW I/O layout in the Linalg dialect?
If the lowering of DepthwiseConv2D is implemented using linalg::MatvecOp, it is convenient to use NCHW for the kernel and the matvec result, and this could be an additional argument for supporting DepthwiseConv2D-NCHW in Linalg.

Hi, thanks for asking - I don’t know about this specific case, but I know that the others were demand-driven, and I expect the answer for this one is just that no one has needed it enough to add it.

These are defined in a little DSL here: llvm-project/core_named_ops.py at ae67984ca6d89c7ccdbdca258cd05c151d8b6431 · llvm/llvm-project · GitHub

And then you run this generated script in your build dir to update the op definitions that get checked in and built: llvm-project/update_core_linalg_named_ops.sh.in at main · llvm/llvm-project · GitHub

Dear Stella,
Thank you for the answer!
Yes, I know about this ‘hand-made’ way to add it locally myself, but is it possible to include it in the MLIR Linalg dialect?

I don’t see a problem with adding it to the set of named ops available by default; it should be very easy with the DSL Stella mentioned. One caveat is that we will want to layer the named ops separately (e.g., as a new dialect) in the future, but adding an extra op is fine for now.

Can you send a PR?

Would it be suitable if I send you the text representation of this new op instead of a PR?

Could you add these new ops to core_named_ops.py, please?

```python
@linalg_structured_op
def depthwise_conv_2d_nchw_chw(
    I=TensorDef(T1, S.N, S.IC, S.OH * S.SH + S.KH * S.DH,
                S.OW * S.SW + S.KW * S.DW),
    K=TensorDef(T2, S.IC, S.KH, S.KW),
    O=TensorDef(U, S.N, S.IC, S.OH, S.OW, output=True),
    strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]),
    dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])):
  """Performs depth-wise 2-D convolution.

  Numeric casting is performed on the operands to the inner multiply, promoting
  them to the same data type as the accumulator/output.
  """
  implements(ConvolutionOpInterface)
  domain(D.n, D.ic, D.oh, D.ow, D.kh, D.kw)
  O[D.n, D.ic, D.oh, D.ow] += TypeFn.cast_signed(
      U, I[D.n, D.ic, D.oh * S.SH + D.kh * S.DH,
           D.ow * S.SW + D.kw * S.DW]) * TypeFn.cast_signed(
      U, K[D.ic, D.kh, D.kw])
```
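To make the indexing concrete, here is a plain-Python sketch of the same computation. This is an illustrative reference only: the function name, the nested-list layout, and the explicit output-size formula are my assumptions, not part of the OpDSL, which derives shapes symbolically.

```python
# Reference semantics of depthwise_conv_2d_nchw_chw in plain Python.
# Illustrative sketch only; names and the nested-list layout are assumptions.

def depthwise_conv_2d_nchw_chw(I, K, strides=(1, 1), dilations=(1, 1)):
    """I: [N][IC][IH][IW], K: [IC][KH][KW] -> O: [N][IC][OH][OW]."""
    sh, sw = strides
    dh, dw = dilations
    n_sz, ic_sz = len(I), len(K)
    kh_sz, kw_sz = len(K[0]), len(K[0][0])
    ih_sz, iw_sz = len(I[0][0]), len(I[0][0][0])
    # Output size for a "valid" (unpadded) convolution.
    oh_sz = (ih_sz - (kh_sz - 1) * dh - 1) // sh + 1
    ow_sz = (iw_sz - (kw_sz - 1) * dw - 1) // sw + 1
    O = [[[[0] * ow_sz for _ in range(oh_sz)] for _ in range(ic_sz)]
         for _ in range(n_sz)]
    for n in range(n_sz):
        for ic in range(ic_sz):
            for oh in range(oh_sz):
                for ow in range(ow_sz):
                    for kh in range(kh_sz):
                        for kw in range(kw_sz):
                            # O[n, ic, oh, ow] +=
                            #     I[n, ic, oh*SH + kh*DH, ow*SW + kw*DW]
                            #     * K[ic, kh, kw]
                            O[n][ic][oh][ow] += (
                                I[n][ic][oh * sh + kh * dh][ow * sw + kw * dw]
                                * K[ic][kh][kw])
    return O


# Example: N=1, IC=2, 3x3 input per channel, 2x2 kernel per channel.
I = [[[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
      [[10, 20, 30], [40, 50, 60], [70, 80, 90]]]]
K = [[[1, 1], [1, 1]], [[2, 0], [0, 0]]]
print(depthwise_conv_2d_nchw_chw(I, K))
# -> [[[[12, 16], [24, 28]], [[20, 40], [80, 100]]]]
```

Each input channel is convolved with its own 2-D kernel; there is no reduction across channels, which is what distinguishes the depthwise op from a regular Conv2D.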

```python
@linalg_structured_op
def depthwise_conv_2d_nchw_mchw(
    I=TensorDef(T1, S.N, S.IC, S.OH * S.SH + S.KH * S.DH,
                S.OW * S.SW + S.KW * S.DW),
    K=TensorDef(T2, S.CM, S.IC, S.KH, S.KW),
    O=TensorDef(U, S.N, S.CM, S.IC, S.OH, S.OW, output=True),
    strides=IndexAttrDef(S.SH, S.SW, default=[1, 1]),
    dilations=IndexAttrDef(S.DH, S.DW, default=[1, 1])):
  """Performs depth-wise 2-D convolution.

  Numeric casting is performed on the operands to the inner multiply, promoting
  them to the same data type as the accumulator/output.
  """
  implements(ConvolutionOpInterface)
  domain(D.n, D.cm, D.ic, D.oh, D.ow, D.kh, D.kw)
  O[D.n, D.cm, D.ic, D.oh, D.ow] += TypeFn.cast_signed(
      U, I[D.n, D.ic, D.oh * S.SH + D.kh * S.DH,
           D.ow * S.SW + D.kw * S.DW]) * TypeFn.cast_signed(
      U, K[D.cm, D.ic, D.kh, D.kw])
```
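The multiplier variant differs only in the extra CM dimension: each input channel produces CM output channels, one per kernel in K[cm]. A hedged plain-Python sketch (names and layout are illustrative assumptions, not part of the DSL):

```python
# Illustrative sketch of the channel-multiplier variant: input channel ic
# produces CM output channels, indexed O[n, cm, ic, oh, ow].

def depthwise_conv_2d_nchw_mchw(I, K, strides=(1, 1), dilations=(1, 1)):
    """I: [N][IC][IH][IW], K: [CM][IC][KH][KW] -> O: [N][CM][IC][OH][OW]."""
    sh, sw = strides
    dh, dw = dilations
    n_sz, ic_sz = len(I), len(I[0])
    cm_sz, kh_sz, kw_sz = len(K), len(K[0][0]), len(K[0][0][0])
    ih_sz, iw_sz = len(I[0][0]), len(I[0][0][0])
    oh_sz = (ih_sz - (kh_sz - 1) * dh - 1) // sh + 1
    ow_sz = (iw_sz - (kw_sz - 1) * dw - 1) // sw + 1
    O = [[[[[0] * ow_sz for _ in range(oh_sz)] for _ in range(ic_sz)]
          for _ in range(cm_sz)] for _ in range(n_sz)]
    for n in range(n_sz):
        for cm in range(cm_sz):
            for ic in range(ic_sz):
                for oh in range(oh_sz):
                    for ow in range(ow_sz):
                        for kh in range(kh_sz):
                            for kw in range(kw_sz):
                                # Same input indexing as the CHW variant,
                                # but the kernel is selected per (cm, ic).
                                O[n][cm][ic][oh][ow] += (
                                    I[n][ic][oh * sh + kh * dh]
                                        [ow * sw + kw * dw]
                                    * K[cm][ic][kh][kw])
    return O


# Example: N=1, IC=1, CM=2, 2x2 input, 1x1 kernels (scale by 1 and by 3).
I = [[[[1, 2], [3, 4]]]]
K = [[[[1]]], [[[3]]]]
print(depthwise_conv_2d_nchw_mchw(I, K))
# -> [[[[[1, 2], [3, 4]]], [[[3, 6], [9, 12]]]]]
```

Downstream consumers typically collapse the (CM, IC) pair into a single channel dimension of size CM * IC.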

Hi there – I would refer you to the LLVM Developer Policy with respect to the process of submitting patches to LLVM (this is an LLVM project, not a Google project, fwiw). Just posting a chunk of code in a Discourse post is not a typical way to make changes to LLVM, and no one here is expected to accept such a thing. While we generally try to be helpful, we just can’t handle/scale random code snippets being handed to developers in forum messages. Typically, a code change will also require testing changes and other things that need to be dealt with in the review process.

Now, Nicolas or someone else may decide to help with something coming in this way, but I’d recommend sending a patch. If you can’t do that, then filing an issue may be a good way to document the need, but the project runs on volunteers and I don’t know how it will be prioritized.

Thank you for the note/suggestion. We would love to accept the patch if you can put it through regular channels.

Okay, thank you, Stella, I got it.

I apologize for the delay, we had a long weekend in Europe and your message slipped through the cracks.
For this specific issue, I expect that it is a good intro task and I recommend sending a PR with a test.

Okay, thank you for the answer, I will follow your suggestion.