Hi all,
I'm working on an LLVM back-end right now, and I think I found a bug in an optimization pass. When compiling the following code using llvm-gcc (the current 2.5 release) with -O2,
#include <stdio.h>
#include <string.h>

int main(int argc, char** argv) {
  /* pStr points to "I", or to its terminating '\0' when argc > 100 */
  char* pStr = "I" + (argc > 100);
  printf("%d\n", strcmp(pStr, "I") == 0);
}
the strcmp call is replaced by a 16-bit load whose result is compared against 73, the integer value of 'I':
define i32 @main(i32 %argc, i8** nocapture %argv) nounwind {
entry:
  %0 = icmp sgt i32 %argc, 100		; <i1> [#uses=1]
  %1 = zext i1 %0 to i32		; <i32> [#uses=1]
  %2 = getelementptr [2 x i8]* @.str, i32 0, i32 %1		; <i8*> [#uses=1]
  %tmp = bitcast i8* %2 to i16*		; <i16*> [#uses=1]
  %lhsv = load i16* %tmp, align 1		; <i16> [#uses=1]
  %3 = icmp eq i16 %lhsv, 73		; <i1> [#uses=1]
  %4 = zext i1 %3 to i32		; <i32> [#uses=1]
  %5 = tail call i32 (i8*, ...)* @printf(i8* getelementptr ([4 x i8]* @.str1, i32 0, i32 0), i32 %4) nounwind		; <i32> [#uses=0]
  ret i32 undef
}
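
In C terms, the transformed program behaves like the sketch below (my reconstruction, not compiler output; load16 is an illustrative helper standing in for the "load i16* %tmp, align 1" in the IR):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for the IR's unaligned 16-bit load. */
static uint16_t load16(const void* p) {
  uint16_t v;
  memcpy(&v, p, sizeof v);
  return v;
}

int main(int argc, char** argv) {
  /* @.str = { 'I', '\0' }; padded to 3 bytes here so the 16-bit
     load at str + 1 stays in bounds in portable C. */
  static const char str[3] = "I";
  const char* pStr = str + (argc > 100);
  /* The optimized code: one 16-bit load compared against 73 (0x0049),
     which is "I\0" read in little-endian byte order only. */
  printf("%d\n", load16(pStr) == 73);
}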
On little-endian machines the code works correctly, but on big-endian machines %lhsv would have to be compared against 73 << 8, because the 'I' byte ends up in the high half of the loaded 16-bit value.
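
Spelled out with the byte values: 'I' is 0x49 and the terminating NUL is 0x00, so the loaded i16 is 0x0049 = 73 on little-endian but 0x4900 = 73 << 8 = 18688 on big-endian. A sketch of the constant the pass would have to build (the helper name is mine, not LLVM's):

#include <stdint.h>

/* Builds the i16 constant for the folded strcmp in target byte order. */
static uint16_t folded_strcmp_const(uint8_t b0, uint8_t b1, int big_endian) {
  return big_endian ? (uint16_t)((b0 << 8) | b1)   /* 'I', '\0' -> 0x4900 */
                    : (uint16_t)((b1 << 8) | b0);  /* 'I', '\0' -> 0x0049 */
}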
Kind regards
Timo Stripf