Problems with optimizations and va_arg intrinsic

Hi all,

I am currently writing an obfuscation pass (on the LLVM 3.3 code base) that works at the IR level and implements procedure fusion. For this, I rely heavily
on the va_arg instruction. My code is functional when compiled with no optimization (-O0), but I see strange behavior
when compiling with -O2 or -O3. I am currently testing against the libgmp and OpenSSL test suites.

I know that clang does *not* rely on the va_arg instruction, but instead lowers variadic-argument handling itself, hence my question: is the va_arg
instruction something we can trust to work with, or not?

Thank you for any input on this topic!

Cheers,

Rinaldini Julien

Hi Rinaldini,

I know that clang does *not* rely on the va_arg instruction, but instead lowers variadic-argument handling itself, hence my question: is the va_arg
instruction something we can trust to work with, or not?

Probably not. Someone seems to have made an attempt to get something
working in the x86 backend, but unfortunately it doesn't have all the
information needed to obey the ABI properly (e.g. for over-aligned
struct types, LLVM just doesn't know that the type itself is
over-aligned). I certainly wouldn't want to rely on it.

Cheers.

Tim.

Hi!

Thanks for your answer… I'll look at how clang lowers the va_arg instruction in IR to see how it works!

Cheers