How to build linalg.TransposeOp in the MLIR Python bindings?

I want to use linalg.TransposeOp(), but I get an error like this:

TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
    1. mlir._mlir_libs._mlir.ir.OpView(operation: object)

Should I pass an operation object to this API? For linalg.TransposeOp, I think it should be invoked with input, init, and permutation.

Can you provide the code you are trying? I can't quite figure it out from the error message and would practically need to fiddle with it, as I wasn't involved in this op specifically, and I mostly use the Python bindings for higher-level opsets above linalg.

My code is this:

output_shape = [13, 1]
tensor_type = ir.RankedTensorType.get(output_shape, ir.IntegerType.get_signless(1))
op1 = tensor.EmptyOp([1, 13], ir.IntegerType.get_signless(1))
op2 = tensor.EmptyOp([13, 1], ir.IntegerType.get_signless(1))
op = linalg.TransposeOp(tensor_type, op1.result, op2.result, ir._denseI64ArrayAttr([1, 0], ctx))
print(op)

Thank you for your help!

Thanks. I'm not somewhere that I can run this right now, so I'm just working by inspection. I expect the issue may be in the generated constructor: can you provide the full stack trace?

Traceback (most recent call last):
  File "/buddy-mlir/examples/MLIRLlama/buddy/test.py", line 48, in <module>
    op = linalg.TransposeOp(tensor_type, op1.result, op2.result, ir._denseI64ArrayAttr([1, 0], ctx))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
    1. mlir._mlir_libs._mlir.ir.OpView(operation: object)

Invoked with: Type(tensor<13x1xi1>), <mlir._mlir_libs._mlir.ir.OpResult object at 0x7f00345a02b0>, <mlir._mlir_libs._mlir.ir.OpResult object at 0x7f001c54f430>, Attribute(array<i64: 1, 0>)

I happened to be looking at the Python bindings these days, and this was a good example to dig into in more detail.

I think the issue is this:

As noted in the comment, the current code doesn't cover TransposeOp's operands (it has input and init instead of the supported inputs and outputs).

I experimented with this patch:

diff --git a/mlir/python/mlir/dialects/_linalg_ops_ext.py b/mlir/python/mlir/dialects/_linalg_ops_ext.py
index 3f6d854ca3e2..a3473696359d 100644
--- a/mlir/python/mlir/dialects/_linalg_ops_ext.py
+++ b/mlir/python/mlir/dialects/_linalg_ops_ext.py
@@ -34,6 +34,20 @@ class StructuredOpMixin:
             )
         )

+class StructuredOpMixinB:
+    """All structured ops use the same mixin class."""
+
+    def __init__(self, input, init, result=(), loc=None, ip=None):
+        super().__init__(
+            self.build_generic(
+                results=[result],
+                operands=[input, init],
+                loc=loc,
+                ip=ip,
+            )
+        )
+
+

 def select_opview_mixin(parent_opview_cls):
     # TODO: This shouldn't be a heuristic: we should have a way to annotate
@@ -45,3 +59,10 @@ def select_opview_mixin(parent_opview_cls):
         and hasattr(parent_opview_cls, "result_tensors")
     ):
         return StructuredOpMixin
+    if (
+        "__init__" not in parent_opview_cls.__dict__
+        and hasattr(parent_opview_cls, "input")
+        and hasattr(parent_opview_cls, "init")
+    ):
+        return StructuredOpMixinB
+

and with that change, it accepts this:

op = linalg.TransposeOp(input=op1.result, init=op2.result, result=RankedTensorType.get((13, 1), f32))

But this is not a proper fix, since there might be operations with other operand names.

see [mlir][python] remove mixins by makslevental · Pull Request #68853 · llvm/llvm-project · GitHub

linalg.transpose doesn't seem to be a linalg_structured_op. A linalg_structured_op requires its function parameters to be instances of (TensorDef, ScalarDef, IndexAttrDef, UnaryFnAttrDef, BinaryFnAttrDef, TypeFnAttrDef), but permutation is a DenseI64ArrayAttr.
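For reference, here is roughly what the OpDSL expects. This is a sketch adapted from the documented matmul example (matmul_sketch is just an illustrative name): every parameter is one of those Def classes and the body is written over index dimensions, so there is no slot for an attribute like permutation.

from mlir.dialects.linalg.opdsl.lang import *

@linalg_structured_op
def matmul_sketch(
    A=TensorDef(T, S.M, S.K),
    B=TensorDef(T, S.K, S.N),
    C=TensorDef(T, S.M, S.N, output=True),
):
    # All parameters are TensorDefs; the body only indexes loop dims.
    domain(D.m, D.n, D.k)
    C[D.m, D.n] += TypeFn.cast_signed(T, A[D.m, D.k]) * TypeFn.cast_signed(T, B[D.k, D.n])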

Is there a way to express linalg.transpose other than as a linalg_structured_op? Thank you for your help.

It seems to be missing the permutation parameter. Is there a better way to implement this? Thank you for your help.

@weilinquan
Yes, I believe the above PR also enables the use of the attribute.
But I'm not sure if everything is fine for the transpose op.

@makslevental
Thanks for pointing out your PR!
I've tried it and can create linalg.transpose with operands and the attribute:

op1 = tensor.EmptyOp([1, 13], f32)
op2 = tensor.EmptyOp([13, 1], f32)
op3 = linalg.TransposeOp(input=op1, init=op2, result=[RankedTensorType.get((13, 1), f32)], permutation=[1,0])

This gives

  %0 = "tensor.empty"() : () -> tensor<1x13xf32>
  %1 = "tensor.empty"() : () -> tensor<13x1xf32>
  %2 = "linalg.transpose"(%0, %1) <{permutation = array<i64: 1, 0>}> ({
  }) : (tensor<1x13xf32>, tensor<13x1xf32>) -> tensor<13x1xf32>

But there's an issue when I feed it to mlir-opt:

<stdin>:4:8: error: 'linalg.transpose' op region #0 ('region') failed to verify constraint: region with 1 blocks
  %2 = "linalg.transpose"(%0, %1) <{permutation = array<i64: 1, 0>}> ({
       ^

It doesn't seem to be caused by your PR; rather, we're creating a generic MLIR operation from Python, and when the created operation is parsed, it doesn't satisfy the verifier in this case.
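Until this is fixed properly, one possible workaround is to populate the identity region by hand. This is an unverified sketch, assuming op3 and f32 from the snippet above and an active Context/Location:

block = op3.regions[0].blocks.append(f32, f32)  # ^bb0(%in: f32, %out: f32)
with ir.InsertionPoint(block):
    linalg.YieldOp([block.arguments[0]])  # linalg.yield %in : f32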


Thank you for your help! I think the problem is that my LLVM version is old: it doesn't include this PR.

There's a bug here, I think (still tracking it down): TransposeOp : LinalgStructuredBase_Op has no RegionBuilder, because TransposeOp::build calls buildIdentityRegion directly (rather than going through the linalg DSL/generated builders). I'll track this down soon.

see [mlir][linalg] regionBuilder for transpose, broadcast by makslevental · Pull Request #69742 · llvm/llvm-project · GitHub for the fix.


This actually makes me think about the current C API's capabilities.

I think it'd be pretty handy if we could TableGen wrappers for each op builder in the C API; the Python bindings would then just need to wrap them.
For example, suppose mlirLinalgTransposeOpCreate() could be TableGen'ed.
That might hugely improve the C API user's experience.

I'm not sure if this is in line with ideas that have already been discussed.

OK,
I just found that I'd overlooked the early part of the doc: MLIR C API - MLIR (llvm.org)

The core IR API is intentionally low-level, e.g. exposes a plain list of operation's operands and attributes without attempting to assign "semantic" names to them. Users of specific dialects are expected to wrap the core API in a dialect-specific way

This statement might go against my thought above, but I'm not 100% sure. It's especially confusing because 'core IR API', 'core API', and 'MLIR C API' seem to be used interchangeably.
If they all refer to the same thing, I wonder whether the reason for making it intentionally low-level still holds.
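For concreteness, the low-level route the doc describes looks roughly like this in Python. This sketch reuses op1, op2, and f32 from the snippets above; note the op's region would still need to be populated:

generic = ir.Operation.create(
    "linalg.transpose",
    results=[ir.RankedTensorType.get((13, 1), f32)],
    operands=[op1.result, op2.result],  # plain operand list, no semantic names
    attributes={"permutation": ir.DenseI64ArrayAttr.get([1, 0])},
    regions=1,
)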

See https://github.com/nod-ai/PI/blob/main/cpp_ext/TorchOps.impls.cpp. But after trying it out, I can safely say that this is not what you want. You actually want the C API to be small and versatile rather than comprehensive - this way the base distribution stays light and downstream users can codegen against the C API. This is basically what the generated Python bindings are (a downstream user that generates bindings to the C API). Today you can do this for some other language by building MLIR from scratch and extending mlir-tblgen, or you can use an existing distro of MLIR, which ships with the tds, dump the tds to JSON, and reinvent some small parts of the wheel (see here).

@makslevental, thanks for confirming the C API rationale and sharing the pointers to the nod-ai compiler. Yeah, I agree with the basic idea, and it's important to stick to it.

I hadn't thought of the Python bindings as a downstream user, so I had the impression that the C API was bottlenecking the interface between the C++ core library and the Python bindings, though I totally agree with the principles of the C API and with the design of the Python bindings sitting on top of it.
Anyway, I don't have a better idea to challenge the current design; I believe it's the best way.

Sorry @weilinquan for adding so much discussion not immediately related to the original problem.
To avoid any confusion: @makslevental's related commits have all been merged upstream, and your issues should all be fixed now.
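For reference, pulling the pieces of this thread together, a sketch like the following should work on a build that includes both PRs (fill_builtin_region fills the identity region via the newly registered regionBuilder):

from mlir import ir
from mlir.dialects import tensor, linalg

with ir.Context(), ir.Location.unknown():
    module = ir.Module.create()
    f32 = ir.F32Type.get()
    with ir.InsertionPoint(module.body):
        op1 = tensor.EmptyOp([1, 13], f32)
        op2 = tensor.EmptyOp([13, 1], f32)
        op3 = linalg.TransposeOp(
            result=[ir.RankedTensorType.get((13, 1), f32)],
            input=op1.result,
            init=op2.result,
            permutation=[1, 0],
        )
        # Fill the ^bb0 + linalg.yield body that the raw constructor leaves empty.
        linalg.fill_builtin_region(op3.operation)
    print(module)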


Thank you for your help!

Thank you for your answer! It makes sense!