I want to use linalg.TransposeOp(), but I get an error like this:
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
1. mlir._mlir_libs._mlir.ir.OpView(operation: object)
Should I pass an operation object to this API? For linalg.TransposeOp, I think it should be invoked with input, init, and permutation.
Can you provide the code you are trying? I can't quite figure it out from the error message alone and would practically need to fiddle with it, as I wasn't involved in this op specifically and mostly use the Python bindings for higher-level opsets above linalg.
Thanks. I'm not somewhere I can run this right now, so I'm just working by inspection. I expect the issue may be in the generated constructor: can you provide the full stack trace?
linalg.transpose doesn't seem to be a linalg_structured_op. A linalg_structured_op requires its function parameters to be instances of TensorDef, ScalarDef, IndexAttrDef, UnaryFnAttrDef, BinaryFnAttrDef, or TypeFnAttrDef, but permutation is a DenseI64ArrayAttr.
Is there a way to express linalg.transpose without going through linalg_structured_op? Thank you for your help.
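For reference, the closest I have gotten is dropping down to the generic ir.Operation.create API and building the op's region by hand. This is an untested sketch (I'm working by inspection): the shapes, the tensor.empty stand-ins for the real values, and the body block that yields the input element are all my assumptions, and it needs an MLIR build with the Python bindings on PYTHONPATH:

```python
# Untested sketch: build linalg.transpose via the generic Operation API
# instead of a linalg_structured_op. Shapes and values are assumptions.
from mlir import ir
from mlir.dialects import linalg, tensor  # registers the dialects

with ir.Context(), ir.Location.unknown():
    module = ir.Module.create()
    with ir.InsertionPoint(module.body):
        f32 = ir.F32Type.get()
        out_ty = ir.RankedTensorType.get([3, 2], f32)
        inp = tensor.EmptyOp([2, 3], f32).result   # stand-in for %0
        init = tensor.EmptyOp([3, 2], f32).result  # stand-in for %1
        op = ir.Operation.create(
            "linalg.transpose",
            results=[out_ty],
            operands=[inp, init],
            attributes={"permutation": ir.DenseI64ArrayAttr.get([1, 0])},
            regions=1,
        )
        # The verifier wants the region to hold exactly one block that
        # yields the input element, so append it explicitly.
        block = op.regions[0].blocks.append(f32, f32)
        with ir.InsertionPoint(block):
            linalg.YieldOp([block.arguments[0]])
    print(module)
```

Whether this is the intended way to construct the op, I'm not sure.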
But there's an issue when I feed it to mlir-opt.
<stdin>:4:8: error: 'linalg.transpose' op region #0 ('region') failed to verify constraint: region with 1 blocks
%2 = "linalg.transpose"(%0, %1) <{permutation = array<i64: 1, 0>}> ({
^
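For comparison, I would expect the op's custom assembly form, which carries the region implicitly, to round-trip through mlir-opt; the shapes here are my assumption:

```mlir
%2 = linalg.transpose ins(%0 : tensor<2x3xf32>)
                      outs(%1 : tensor<3x2xf32>)
                      permutation = [1, 0]
```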
It doesn't seem to be caused by your PR, but we're creating a generic MLIR operation from Python, and when the created operation is parsed it doesn't satisfy the verifier in this case.
This actually makes me think about the current C-API's capabilities.
I think it'd be pretty handy if we could TableGen the wrappers for each op builder in the C-API, so the Python bindings would just need to wrap them.
For example, suppose we could expect mlirLinalgTransposeOpCreate() to be TableGen'ed.
That might hugely enhance the C-API user's experience.
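As a sketch of what such a generated wrapper's signature might look like, on top of the existing mlir-c types (this function does not exist; the name and parameters are purely hypothetical):

```c
/* Purely hypothetical TableGen'ed wrapper -- not part of the upstream
 * C-API; it only illustrates the proposal above. */
#include "mlir-c/IR.h"

MlirOperation mlirLinalgTransposeOpCreate(MlirLocation loc, MlirValue input,
                                          MlirValue init,
                                          const int64_t *permutation,
                                          intptr_t numPermutation);
```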
I'm not sure if this is in line with ideas that have already been discussed.
The core IR API is intentionally low-level, e.g. exposes a plain list of operation's operands and attributes without attempting to assign "semantic" names to them. Users of specific dialects are expected to wrap the core API in a dialect-specific way.
This statement might contradict my thought above, but I'm not 100% sure. It's especially confusing because "core IR API", "core API", and "MLIR C-API" are mixed together.
If they all refer to the same thing, I wonder whether the reason it was made intentionally low-level still holds.
See https://github.com/nod-ai/PI/blob/main/cpp_ext/TorchOps.impls.cpp. But after trying it out, I can safely say that this is not what you want. You actually want the C API to be small and versatile rather than comprehensive - this way the base distribution stays light and downstream users can codegen against the C API. This is basically what the generated Python bindings are (a downstream user that generates bindings to the C API). Today you can do this for some other language by building MLIR from scratch and extending mlir-tblgen, or you can use an existing distro of MLIR, which ships with the tds, dump the tds to json, and reinvent some small parts of the wheel (see here).
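If you go the json route, it's TableGen's generic JSON backend, so something along these lines; the paths are illustrative and depend on where your distro puts the tds:

```shell
# Illustrative paths; --dump-json is TableGen's generic JSON backend.
mlir-tblgen --dump-json \
    $MLIR_INSTALL/include/mlir/Dialect/Linalg/IR/LinalgOps.td \
    -I $MLIR_INSTALL/include > linalg_ops.json
```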
@makslevental, thanks for confirming the C-API rationale and sharing the pointers to the nod-ai compiler. Yeah, I agree with the basic idea, and it's important to keep up with it.
I hadn't thought of the Python bindings as a downstream user, so I had the impression that the C-API was bottlenecking the interface between the C++ core library and the Python bindings, though I fully agree with the principles of the C-API and the design of the Python bindings sitting on top of it.
Anyway, I don't have a better idea to challenge the current design; I believe it's the best approach.
Sorry @weilinquan for adding so many topics not immediately related to the original problem.
To avoid any confusion: @makslevental's related commits have all been merged upstream, and your issues should all be fixed now.