Unknown types in operations

Hi,

Is it possible, in an MLIR dialect, to represent an operation acting on an unknown type?
For example, say the operation is “increase A by 1”
That operation can equally apply to pointers or integers.
On top of that, one might therefore wish to have a function whose signature could be:
Function name: “increase”
Input parameter can be ptr or int.
Output parameter can be ptr or int depending on what the input param was.
I would like to describe that as a single function, rather than splitting it into two functions, one taking a ptr and one taking an int.

Other examples could be differently sized matrices, tensors, or qubits: the same idea as above, just with different types.

Kind Regards

James

Operations can work on any combination of types as long as their verifier supports that. For example, most operations in the arithmetic dialect work on scalars, vectors and tensors. Note that you must specify the type for each particular instance of the operation.
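For illustration, here is the same `arith.addi` operation instantiated at several types; each instance spells out its operand/result type explicitly, and it is the op's verifier that decides which combinations are legal:

```mlir
// arith.addi accepts signless integers as well as vectors and
// tensors of them; the type is written per instance.
%s = arith.addi %a, %b : i32
%v = arith.addi %v0, %v1 : vector<4xi32>
%t = arith.addi %t0, %t1 : tensor<8xi32>
```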

Functions from the function dialect have a fixed type and do not support any polymorphism intentionally. However, given that functions are just operations in MLIR, you can define your own function-like operation in your dialect that would support polymorphic calls by relaxing the verification of type match between the function-call operation and the function-definition operation according to your needs. It is up to you then to implement the lowering from those polymorphic functions to non-polymorphic equivalents should you need to convert them to the core dialect.
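As a sketch of what such a dialect could look like (the `poly` dialect name, ops, and `!poly.any` type are all made up for illustration):

```mlir
// Hypothetical function-like op whose verifier allows a
// polymorphic signature.
poly.func @increase(%arg0: !poly.any) -> !poly.any { ... }

// Each call site supplies concrete types; the relaxed
// call-to-definition type check accepts both.
%a = poly.call @increase(%i) : (i32) -> i32
%b = poly.call @increase(%p) : (!llvm.ptr) -> !llvm.ptr
```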

Alternatively, you could define your own IntOrPtr type, or an Any type (provided your ops allow it; the ops stay monomorphic at this level and the polymorphism is handled entirely by you). As Alex mentioned, it is then up to you to provide the lowerings/monomorphizations required to compile it. The int-or-ptr case could be trivial, unless you want the pointer case to mean incrementing the pointed-to value; otherwise your interpretation of a lowered pointer could be C-like, treating it as an integer.
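Such a monomorphization pass might, for the C-like interpretation where the pointer is treated as an integer address, emit something along these lines (a sketch; the function names are hypothetical):

```mlir
// Integer instance of the polymorphic @increase.
func.func @increase_int(%x: i64) -> i64 {
  %c1 = arith.constant 1 : i64
  %r = arith.addi %x, %c1 : i64
  return %r : i64
}

// Pointer instance: round-trip through an integer address.
func.func @increase_ptr(%p: !llvm.ptr) -> !llvm.ptr {
  %addr = llvm.ptrtoint %p : !llvm.ptr to i64
  %c1 = llvm.mlir.constant(1 : i64) : i64
  %next = llvm.add %addr, %c1 : i64
  %q = llvm.inttoptr %next : i64 to !llvm.ptr
  return %q : !llvm.ptr
}
```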

You could do this with `tensor<*xf32>` today. You'd probably want to rank-specialize for codegen purposes.
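Concretely, an unranked-tensor signature and a cast down to a specific rank look like this (a sketch; the function name is made up):

```mlir
// @increase accepts a tensor of any rank.
func.func @increase(%t: tensor<*xf32>) -> tensor<*xf32> { ... }

// Rank specialization for codegen: cast the unranked value to a
// concrete shape where that shape is known.
%ranked = tensor.cast %u : tensor<*xf32> to tensor<4x4xf32>
```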