Hi,
While experimenting with LLVM as a drop-in replacement for GCC, I noticed
some interesting behavior related to integer overflow. Consider the
following test program:
#include <iostream>

int main() {
    int x = 0;
    for (int i = 0; i < 123456789; i++) {
        x += i;
    }
    std::cout << x << std::endl;
    return 0;
}
Compiled with the latest llvm-g++ (2.1, Mac OS X universal tarball) with
no optimization, and with Apple GCC 4.0.1 at all optimization levels,
the program correctly outputs "1206807378".
However, when using llvm-g++ at any optimization level (-O1/2/3), it
instead outputs "-940676270" (which is the above answer minus 2^31).
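For reference, here is a quick sketch of the same loop with a 64-bit
accumulator (I haven't tried it against those exact compiler versions),
just to show the mathematically exact sum and what it becomes when
truncated back to 32 bits:

#include <iostream>

int main() {
    // Accumulate in 64 bits so the running sum cannot overflow.
    long long sum = 0;
    for (int i = 0; i < 123456789; i++) {
        sum += i;
    }
    // Exact sum: 123456788 * 123456789 / 2 = 7620789313366866.
    std::cout << "64-bit sum: " << sum << std::endl;
    // Converting back to a 32-bit int (implementation-defined for an
    // out-of-range value, but the low 32 bits on these platforms) gives
    // 1206807378, matching the unoptimized output above.
    std::cout << "as 32-bit int: " << static_cast<int>(sum) << std::endl;
    return 0;
}

In other words, the "1206807378" result is just the exact sum reduced
mod 2^32, while the optimized result is not.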
Now, is this proper behavior, undefined behavior, a bug, etc.? I can't
quite tell, so perhaps someone can clarify? Thanks