I think this has come up in CIRCT and the LLVM dialect, but I'm not sure we ever found a solution of broad applicability/ergonomics.
In Torch-MLIR I have ops like:
%4:2 = torch.prim.TupleUnpack %3 : !torch.tuple<!torch.list<!torch.int>, !torch.list<!torch.int>> -> !torch.list<!torch.int>, !torch.list<!torch.int>
or
%3 = torch.aten.conv2d %arg0, %arg1, %arg2, %0, %1, %2, %int1 : !torch.vtensor, !torch.vtensor, !torch.vtensor, !torch.list<!torch.int>, !torch.list<!torch.int>, !torch.list<!torch.int>, !torch.int -> !torch.vtensor
And would like it to print as
%4:2 = torch.prim.TupleUnpack %3 : tuple<list<int>, list<int>> -> list<int>, list<int>
%3 = torch.aten.conv2d %arg0, %arg1, %arg2, %0, %1, %2, %int1 : vtensor, vtensor, vtensor, list<int>, list<int>, list<int>, int -> vtensor
That is, omitting the `!torch.` prefix for the types. We have a closed type system in the torch dialect, so this is very ergonomic. I wonder if we could teach OpAsmOpInterface that all types parsed within a region have the dialect prefix omitted? Any other thoughts on a solution to this?
One added request: we have types like `!torch.vtensor<[5,3],f32>` where `f32` is a builtin type used for element types, so having a way for the ValueTensorType parser to indicate that the dtype should be parsed with the "builtin" dialect would help too, if the `!torch` dialect is the default prefix otherwise.