Hi,
During a reverse engineering challenge I used clang/LLVM optimizations to minimize some code, and I found some strange behavior that I can't reproduce with GCC or CL (the Visual Studio compiler).
The C function operates on a data array and a password supplied via argv. The compiled binary works as expected as long as I don't enable any optimizations. As soon as I enable optimization (>= -O1), the whole function is folded down to a handful of constant stores, which sounds great at first but is not right. I can reproduce this with clang 3.9 and 4.0; GCC 5.4 and VS CL >= 2015 do not show this behavior.
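For illustration only, here is a minimal hypothetical sketch (this is not the attached main.cpp, and all values are made up) of the kind of shape an optimizer is allowed to fold: if the bytes written to Plaintext can be computed from compile-time constants alone, so the password argument never actually influences the result, unrolling plus constant propagation collapses the whole loop into constant stores like the ones in the listing below. A case this trivial would probably be folded by most compilers; the attached code is more involved, but the principle is the same.

// Hypothetical sketch, not the attached main.cpp.
#include <cstddef>

static const unsigned char Ciphertext[8] = {
    0x88, 0x30, 0x10, 0xE1, 0x63, 0xA2, 0x1E, 0xF6  // made-up values
};

unsigned char Plaintext[8];

void DecryptBlock(unsigned char *password)
{
    (void)password;                  // the argument never reaches the stores
    const unsigned char key = 0x3C;  // assumed compile-time key
    for (std::size_t i = 0; i < sizeof Ciphertext; ++i)
        Plaintext[i] = static_cast<unsigned char>(Ciphertext[i] ^ key);
}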
.text:0000000000400570 ; __int64 __fastcall DecryptBlock(unsigned __int8 *)
.text:0000000000400570 public DecryptBlock(unsigned char *)
.text:0000000000400570 DecryptBlock(unsigned char *) proc near ; CODE XREF: main+5p
.text:0000000000400570 mov cs:byte_60106F, 54h
.text:0000000000400577 mov cs:byte_60106E, 0CDh
.text:000000000040057E mov cs:byte_60106D, 0BFh
.text:0000000000400585 mov cs:byte_60106C, 1Bh
.text:000000000040058C mov cs:byte_60106B, 0E4h
.text:0000000000400593 mov cs:byte_60106A, 28h
.text:000000000040059A mov cs:byte_601069, 56h
.text:00000000004005A1 mov cs:byte_601068, 0ACh
.text:00000000004005A8 mov rax, 0F61EA263E1103088h
.text:00000000004005B2 mov cs:Plaintext, rax
.text:00000000004005B9 retn
.text:00000000004005B9 DecryptBlock(unsigned char *) endp
Any idea whether this is a bug, or why clang shows this behavior?
Thanks,
Peter Garba
I’ve attached the sample code to the mail. Please ignore the comments and the style of the code.
main.cpp (4.58 KB)