>> In other words, abandoning overflow detection makes the
>> duplication of types redundant, while requiring it would be a
>> great burden on CPUs that don't have overflow exception hardware.
>Yes, you're right. This has been a desired change for quite some time
>now. Unfortunately, it's a huge impact on nearly every part of LLVM. We
>will probably do it around the 2.0 time frame when we can afford to
>break bytecode compatibility and generally clean up a lot of other
>things as well.
Uh, does this mean you're contemplating getting rid of llvm's ability
to detect an integer overflow? So if I add, say, two 32-bit signed
ints with values 2000000000 and 2000000000 I'm going to get
-294967296 and have no way to know that something bad happened?
I'm not sure I follow how an overflow equates to "something bad
happened", but ...
As Chris mentioned, I think the plan is to simply not associate the
signedness with the types but with the instructions instead. So, the
plan going forward is to have (using your example):
%sum = sadd i32 2000000000, 2000000000
In the current implementation, we would have:
%sum = add int 2000000000, 2000000000
The difference is that the constants are not signed (i32 is just a
"sign unspecified" 32-bit quantity) and the instruction is signed (sadd
versus add). The sadd instruction will interpret the i32 quantities as
signed values and correspondingly do a signed addition resulting in the
overflow you suggested. If uadd (unsigned add) were to be used, the
quantities would be interpreted as unsigned and an unsigned add would be
done, resulting in the non-overflowed 4000000000 value.
And that is at least *my* understanding of future plans. I don't think
there's any schedule for this, however; it's a pretty major change.
That would make me sad. I'm not entirely sure I see the rationale;
isn't it the case that only languages that care to support such
overflow detection would pay the runtime cost?
Exactly. This arrangement doesn't really change what can be supported
but it does get rid of a lot of casts between signed and unsigned
equivalents. By making the instruction determine the interpretation of
its operands, the higher-level language is still able to perform the
same computations albeit slightly differently.
This seems like more of the circularity of the tyranny of C. Hardware
certainly *could* support this transparently; C doesn't care, though; so
hardware doesn't need to support overflow detection.
How are you suggesting we support overflow detection? With an "overflow
register" like on many architectures? I think LLVM's goal here is to
simply express the required computation and leave it to the back end
code generators to deal with useful/fast/best instructions to generate
from the LLVM mid-level IR. If a machine has the ability to detect
overflow then it should also recognize LLVM instruction combinations
that could benefit from that facility.
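For a front end that does want overflow checking, nothing in this scheme is
lost: the check can be expressed as ordinary arithmetic and comparisons, which
a back end is free to lower to a hardware overflow flag if it has one. A
portable C sketch (the helper name is my own, hypothetical):

```c
#include <stdint.h>
#include <limits.h>
#include <stdbool.h>

/* Hypothetical helper: detect whether a signed 32-bit add would overflow,
   using only comparisons, with no reliance on an overflow flag register. */
bool sadd32_overflows(int32_t a, int32_t b) {
    if (b > 0 && a > INT32_MAX - b) return true;  /* would exceed INT32_MAX */
    if (b < 0 && a < INT32_MIN - b) return true;  /* would fall below INT32_MIN */
    return false;
}
```

For the example above, `sadd32_overflows(2000000000, 2000000000)` is true. A
code generator that recognizes this compare-before-add pattern can fold it
into a single flag-setting add on machines that support it.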