MLIR literals, or type inference for operands

Previously, we discussed type inference here:

As a result, we added an optional hook operations can implement to enable inferring result types. I’m working on a DSL embedded in Swift, and have recently been trying to improve the ergonomics of literals, which are effectively context-free “constant” operations. I’m wondering whether we can use an approach similar to result type inference to make literal MLIR values simpler.

For a concrete example, let’s assume we are trying to generate MLIR for the statement foo & 42, in which foo is an MLIR value, & represents the comb.and operation (from CIRCT), and 42 is an integer literal. comb.and requires that all operands have the same type, so we can infer the type of 42 to be whatever the type of foo is. As an additional complication, if we were to create an rtl.constant operation, we would need to specify which block to add that operation to, which is also implied by foo.

The way I currently handle this is with a sum type (called Port, for reasons) which can be either an MlirValue or a literal. The Swift code that creates the comb.and operation is then responsible for taking a sequence of Ports and promoting literals to their respective types. This works OK, but because it recreates some of the semantics of the comb.and operation in Swift, it carries a considerable maintenance burden, and other language bindings will have to write their own versions of the same logic. For operations like comb.and this could even be done automatically, because I believe it has a trait saying all its operands must be the same type.
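To make the current approach concrete, here is a rough Swift sketch. Everything here (MlirType, MlirValue, buildConstant, promoteForCombAnd) is a hypothetical stand-in for the real C-API bindings, not actual MLIR or CIRCT API:

```swift
// Stand-ins for the real MLIR C-API handle types.
struct MlirType: Equatable { var description: String }   // e.g. "i32"
struct MlirValue { var type: MlirType }

// A "Port" is either an existing SSA value or a not-yet-materialized literal.
enum Port {
    case value(MlirValue)
    case literal(Int)
}

// Hypothetical helper: real code would insert an rtl.constant into the block
// implied by the value whose type we borrowed; here we just fabricate a value.
func buildConstant(_ literal: Int, of type: MlirType) -> MlirValue {
    MlirValue(type: type)
}

/// Promote literal ports to values, re-encoding comb.and's
/// "all operands have the same type" rule on the Swift side.
func promoteForCombAnd(_ ports: [Port]) -> [MlirValue]? {
    var inferred: MlirType?
    for case .value(let v) in ports { inferred = v.type; break }
    guard let type = inferred else { return nil }  // no value to infer from
    return ports.map { (port: Port) -> MlirValue in
        switch port {
        case .value(let v):   return v
        case .literal(let n): return buildConstant(n, of: type)
        }
    }
}
```

For the foo & 42 example above, promoteForCombAnd([.value(foo), .literal(42)]) would give 42 the type of foo — and this same-type rule is exactly the piece that has to be duplicated per operation and per language binding.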

I’m not sure if there is a way to handle this in the MLIR infrastructure, but if we could make it work I think it would be valuable. Specifically, we could introduce a “value or literal” type (and support some blessed set of literals; integers and strings come to mind) and some mechanism to add an operation to a block with a specified list of value-or-literal operands. The operation would define a hook that takes an argument list of value-or-literal operands and creates a set of constant-like operations followed by an instance of itself (or fails, similar to how result type inference works currently).

Update: A simpler alternative might be a hook of the form (operands: [ValueOrLiteral], results: [Optional&lt;Type&gt;]) -> (operandTypes: [Type], resultTypes: [Type]) which can be run manually before creating the operation. It would then be the bindings’ responsibility to create constant operations for the literal operands, but at least they would not need to write the type-inference logic themselves.
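A minimal Swift sketch of what that hook might look like when specialized for comb.and. The names (Ty, ValueOrLiteral, combAndInferTypes) are illustrative only, not real API:

```swift
typealias Ty = String   // stand-in for MlirType; e.g. "i32"

enum ValueOrLiteral {
    case value(Ty)      // an existing SSA value of a known type
    case literal(Int)   // a literal whose type must be inferred
}

/// For comb.and, whose operands and result all share one type:
/// borrow the type from any value operand (or an already-known result
/// type) and assign it to every slot. Returns nil if inference fails.
func combAndInferTypes(
    operands: [ValueOrLiteral], results: [Ty?]
) -> (operandTypes: [Ty], resultTypes: [Ty])? {
    var known: [Ty] = results.compactMap { $0 }
    for case .value(let t) in operands { known.append(t) }
    guard let t = known.first, known.allSatisfy({ $0 == t }) else {
        return nil  // no known type, or conflicting known types
    }
    return (Array(repeating: t, count: operands.count), [t])
}
```

The bindings would run this first, emit a constant of operandTypes[i] for each literal operand, and only then build the comb.and itself.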

Aside: @clattner and @jdd were discussing rtl.array_index (which indexes into a fixed-size array) and the tension between three facts: the bit width of the index type is clog2(&lt;array size&gt;), clog2(1) == 0, and the RTL integer type does not support a bit width of zero. One potential solution is to relax the requirement to max(clog2(x), 1), but then all code that creates array_index operations would need to be updated. If we allowed operations to define a hook inferring their operand types, we could extend the printed form of MLIR to support literals and write something like %foo = rtl.array_index %array, 1: rtl.array&lt;i1x7&gt;, ? with the ? indicating that that argument should be treated as a literal.
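The arithmetic behind this aside can be spelled out with a small sketch (clog2 and indexWidth are illustrative names, not CIRCT API):

```swift
/// clog2(n) = ceil(log2(n)): the minimum bit width needed to
/// distinguish n index values.
func clog2(_ n: Int) -> Int {
    precondition(n >= 1)
    var bits = 0
    while (1 << bits) < n { bits += 1 }
    return bits
}

// clog2(7) == 3: a 7-element rtl.array needs a 3-bit index.
// clog2(1) == 0: a 1-element array would need a 0-bit index, which the
// RTL integer type cannot express. The workaround mentioned above:
func indexWidth(_ arraySize: Int) -> Int { max(clog2(arraySize), 1) }
// indexWidth(1) == 1, so i1 is used even for single-element arrays.
```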

I’m assuming you have seen llvm-project/ at 5a8d5a2859d9bb056083b343588a2d87622e76a2 · llvm/llvm-project · GitHub, which is a static method to determine result types given input operands, op attributes, and regions (if implemented, you also get a generated builder that doesn’t require result types).

Promoting literals to typed values feels very much like a language-side concern; I don’t see why ops in the IR should be doing that rather than a language frontend. And to your example: yes, you need to specify where to create the constant, and no, it is not generally implied by foo — for your application it may be. For performance or correctness reasons one might want it at the top of the isolated-from-above parent op, at the top of the region containing foo, or just before foo (I can’t think why one would want it after). I think you could achieve what you want here with helper functions rather than a new IR construct. The IR should be direct; we should not require folks to think about name resolution, implicit conversions, or the like. That would mean keeping too much state in one’s head.

This sounds fine at the DSL level; at the IR level you should already know your operand types, which you get as input.

I’m not sure why you have an index with a bit width of clog2(array_size) rather than a vector&lt;array_size x i1&gt; (or vector&lt;array_size x bit&gt;); the latter would seem to enable more reuse :man_shrugging: For your example, why not just have %foo = rtl.array_index %array, 1: rtl.array&lt;i1x7&gt; — is it that 1 may optionally be a literal? In that case it could be handled by making this a variadic op with an optional attribute and a pretty printer.

I had not! I was creating custom builders for when the result type could be inferred. Nice!

I made this issue in CIRCT a little while back; feel free to use it or ignore it: Support for inferring result types · Issue #706 · llvm/circt · GitHub

I’ll admit I don’t yet have a fully formed intuition for what belongs on the “language side” versus the “IR side”. A middle ground might be to do only operand type inference, leaving the creation of constant-like operations up to the frontend. That might look like generalizing the result type inference hook to (operands: [Optional&lt;Type&gt;], results: [Optional&lt;Type&gt;]) -> (operands: [Type], results: [Type]). I don’t see a compelling reason to treat results as special here.
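As a sketch of that generalized signature, here is how it could be instantiated in Swift for an op whose operands and results all share one type (the property a trait like SameOperandsAndResultType asserts). Ty and inferAllSameType are hypothetical names:

```swift
typealias Ty = String   // stand-in for MlirType

/// Fill in missing operand/result types from the known ones: for an
/// all-same-type op, any known slot determines every unknown slot.
/// Returns nil if nothing is known or the known types conflict.
func inferAllSameType(
    operands: [Ty?], results: [Ty?]
) -> (operands: [Ty], results: [Ty])? {
    let known = (operands + results).compactMap { $0 }
    guard let t = known.first, known.allSatisfy({ $0 == t }) else {
        return nil
    }
    return (operands.map { $0 ?? t }, results.map { $0 ?? t })
}
```

Result type inference falls out as the special case where all operand types are known and only result slots are nil.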

I agree, except that right now we can provide some static guarantees using traits and other mechanisms (custom builders), which makes the API fairly intuitive. An array of optional types for operands and results at creation time makes for a more “open” contract with the operation creation mechanism. So I’m not saying it can’t be done, but it likely requires careful consideration and design to make a good API (“easy to use, hard to misuse”, an easy mental model, etc.).
