I’ve tried to fix this, but it looks to me like there is no straightforward solution. Looking at the addition case, for example, the problem is that LLVM and MLIR assume i8 to be a signless integer. For signless integers, the addition operation is the same for signed and unsigned integers in two’s complement arithmetic, except when overflow occurs.

I can think of a few ways to solve this situation:

Find the most conservative bounds for the operation, that is, find the bounds where it is impossible for the operation to overflow regardless of the integer being signed or unsigned, and disable folding there.

Find the least conservative bounds for the operation, that is, find the bounds where overflow would happen regardless of the integer being signed or unsigned, and disable folding there. This could arguably provide a balance between performance and correctness.

Add separate operations for signed and unsigned addition to arith. Note that arith already has divsi and divui.

Another solution?
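To make options 1 and 2 concrete, here is a minimal Python sketch of the two folding policies for an i8 addition. The helper names are mine for illustration, not actual MLIR APIs:

```python
BITS = 8
U_MAX = (1 << BITS) - 1                                    # 255
S_MIN, S_MAX = -(1 << (BITS - 1)), (1 << (BITS - 1)) - 1   # -128, 127

def to_signed(x: int) -> int:
    """Reinterpret an unsigned 8-bit pattern as a signed value."""
    return x - (1 << BITS) if x > S_MAX else x

def overflows_unsigned(a: int, b: int) -> bool:
    return a + b > U_MAX

def overflows_signed(a: int, b: int) -> bool:
    s = to_signed(a) + to_signed(b)
    return s < S_MIN or s > S_MAX

def fold_most_conservative(a: int, b: int) -> bool:
    # Option 1: fold only when neither interpretation can overflow.
    return not (overflows_unsigned(a, b) or overflows_signed(a, b))

def fold_least_conservative(a: int, b: int) -> bool:
    # Option 2: refuse to fold only when both interpretations overflow.
    return not (overflows_unsigned(a, b) and overflows_signed(a, b))
```

For example, 100 + 30 overflows only under the signed reading, so option 1 would refuse to fold it while option 2 would still fold it.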

Does anyone here have a suggestion on what is the best solution?

I am not an expert at this, but I expect this may be related to the effort to align LLVM and arith with respect to poison semantics (and supporting the nsw flag from the language reference: LLVM Language Reference Manual — LLVM 18.0.0git documentation).

I don’t quite follow the “except”: two’s complement should work through overflow. What would you expect as a difference?

In C/C++, signed overflow is undefined behavior, but 1) that’s unrelated to the IR and 2) that means the current behavior would also be a valid behavior for C/C++ semantics.

Separate operations (perhaps in another dialect?) for “checked arithmetic” would seem to be the right solution for modeling overflow checking (e.g. for C# checked arithmetic or GCC checked builtins), because the issue impacts more than just constant folding. E.g., checked addition is not associative.
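A small sketch of why checked addition is not associative, using a hypothetical trapping 8-bit signed add in Python (illustrative only, not any particular language’s checked builtin):

```python
BITS = 8
S_MIN, S_MAX = -(1 << (BITS - 1)), (1 << (BITS - 1)) - 1   # -128, 127

def checked_add(a: int, b: int) -> int:
    """Signed i8 addition that traps (raises) on overflow."""
    s = a + b
    if not (S_MIN <= s <= S_MAX):
        raise OverflowError(f"{a} + {b} overflows i8")
    return s

# 100 + (50 + (-50)) == 100 succeeds, but (100 + 50) + (-50)
# traps on the intermediate 150: checked addition is not associative.
checked_add(100, checked_add(50, -50))      # ok: 100
try:
    checked_add(checked_add(100, 50), -50)  # raises OverflowError
except OverflowError as e:
    print(e)
```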

Since i8 is a signless integer, we should in effect be handling both signed and unsigned integers. If we assume the constants to be unsigned integers, then the range is 0 to 255. With this, the result of addition is 130. For the signed integers, however, the range is -128 to 127. Then, the result of the addition is an overflow since 130 is bigger than 127.
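For illustration, assuming operands of 127 and 3 (the actual operands aren’t shown in the thread), a quick Python sketch of the two readings:

```python
BITS = 8

def to_signed(x: int) -> int:
    """Reinterpret an 8-bit pattern as a signed value."""
    return x - (1 << BITS) if x >= (1 << (BITS - 1)) else x

# Hypothetical operands producing 130:
a, b = 127, 3
result = a + b            # 130: in range under the unsigned reading (0..255)
print(to_signed(result))  # -126: 130 is out of range as signed (-128..127),
                          # i.e. the addition overflowed under the signed
                          # interpretation
```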

I’m not saying that this is a problem in general. I understand that signless integers as introduced by LLVM 2.0 have many benefits. As I understand, the idea is that the sign doesn’t matter for many operations, so the compilation can be simplified by handling the signs much later in the process. The problem here is that MLIR currently destroys information during folding which cannot be recovered later.

EDIT: For people interested in this, the difference in two’s complement arithmetic addition for signed and unsigned integers is discussed here (archive).

I think the confusion might be that arith.addi’s documentation is underspecified in this case. See [RFC] Define precise arith semantics, e.g.
Is it possible that you are expecting some particular semantics for signed integer overflow?

The overflow semantics of addi are currently implemented as wraparound semantics (regardless of which sign you interpret it as). This means our desired result for your example in particular is -126 if interpreted as a signed number due to signed integer wraparound.
-126 happens to be the exact same bit pattern as 130 when using two’s complement (10000010), making them equivalent. MLIR therefore treats them as the exact same integer attribute.
E.g., CSE will merge the two constants into one.

So, as you said, it is “handling both signed and unsigned integers” (the result is 130 if interpreted as unsigned, and -126 if interpreted as signed, due to signed integer wraparound).
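A minimal sketch of the wraparound semantics described above, again assuming 127 + 3 as the operands:

```python
BITS = 8
MASK = (1 << BITS) - 1

def addi(a: int, b: int) -> int:
    """Wraparound (modular) i8 addition: the single operation
    a signless addi models."""
    return (a + b) & MASK  # keep only the low 8 bits of the result

assert addi(127, 3) == 130             # unsigned reading
assert addi(127, 3) == (-126) & MASK   # same 8-bit pattern as -126
assert format(addi(127, 3), "08b") == "10000010"
```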

That is back to what I wrote before: you may seem to expect the C/C++ standard behavior (where the wraparound overflow behavior is unfortunately conflated with the sign), which the arith dialect does not need to follow, so we adopt the equivalent of gcc’s `-fwrapv` mode.
Moreover, that does not answer “what do you expect?”: what result would be better for the value here? Replacing it with an unreachable and deleting the block? If we let this go as-is to LLVM, you would get the same folding at the LLVM level, because we’d lower to an add with the same semantics!

Good point! I didn’t understand that the output might be printed as

%c-126_i8 = arith.constant -126 : i8

but that doesn’t automatically mean that the value is a signed int. It’s just the representation that MLIR chose for the signless value.

Well, I’m probably misunderstanding here, but I would expect the compiler (both MLIR and LLVM) to prefer correctness. In this case, I would expect (again, I’m probably wrong) MLIR and LLVM to either throw an error (-ftrapv, basically) or avoid folding, so that the implementation of the addition function can decide what to do at runtime.

Thanks to you mentioning -fwrapv, I did read the ISO standard working group C11/3.4.3 about “undefined behavior”. It specifies that integer overflow is an example of undefined behavior and that overflow basically is caused by “use of a non-portable or erroneous program construct or of erroneous data”.

I guess that’s the answer I was looking for. In line with the ISO standard: overflows occur because the input was erroneous, so you should avoid it because the compiler provides no guarantees about what happens next.

Sure, but “correctness” has to refer to a spec. The “signed integers can’t overflow” expectation isn’t universal, I believe (and neither is “unsigned integers wrap around”).
If I believe [this table](https://en.wikipedia.org/wiki/Integer_overflow#Detection), Java would wrap around on signed integer overflow.

Similarly seems like C# or Matlab don’t wrap around but treat signed and unsigned integer the same way.

This is only an answer in the context of C/C++.
This is not directly what happens in MLIR, we don’t have to follow this standard.
Our addition can very well be defined with two’s complement overflow semantics (like in LLVM without the nsw flag).

Thanks for your clarification again. I was watching some manufacturing video and had to think back to the discussion here. Many applications would suffer severely from overflows, right? For example, CNC drill positioning or bank transfers. The current “silent” wraparound would therefore make MLIR unsuitable for any such applications? Or is MLIR only aimed at applications where computational efficiency is more important than accuracy, and the overflow won’t affect the results in a meaningful way?

In general, two’s complement behavior on overflow is well-defined and well-understood. Keep in mind that the arith dialect is typically a component at a very low level of your complete software stack. You can build a (source-language) frontend on top of it with either wraparound or UB/trap semantics; if you want an example, see how llvm::APInt builds on top of the integer semantics from C/C++.
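As an illustration of that layering, here is a sketch (in Python; the names are mine) of a source-level trapping signed add built purely on top of a wraparound primitive, using the standard sign-comparison overflow test:

```python
BITS = 8
MASK = (1 << BITS) - 1
SIGN = 1 << (BITS - 1)

def addi(a: int, b: int) -> int:
    """Low-level wraparound addition on 8-bit patterns."""
    return (a + b) & MASK

def to_signed(x: int) -> int:
    return x - (1 << BITS) if x & SIGN else x

def trapping_signed_add(a: int, b: int) -> int:
    """Source-language layer: signed addition that traps on overflow,
    implemented only in terms of the wraparound primitive."""
    r = addi(a & MASK, b & MASK)
    # Overflow iff both operands share a sign that the result lacks.
    if (a & SIGN) == (b & SIGN) and (a & SIGN) != (r & SIGN):
        raise OverflowError("signed i8 overflow")
    return to_signed(r)
```

The point is only that the trap/UB policy lives in the frontend layer; the underlying wraparound addition stays as simple and well-defined as before.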

A question for you would be: why would this affect signed more than unsigned?

Another question for you may be: considering the semantics of overflow in C/C++ (both signed and unsigned), are these languages unsuitable for CNC and bank transfers?

One further thought could be: what is the overflow semantics of X86 ISA (signed and unsigned) and how does it make it usable or not for such applications?

Let’s be clear that we’re talking about the numerical semantics of the arith dialect, not MLIR itself.