high bit of function address set incorrectly?

Hi,

I recently updated to current llvm svn and fixed up the minor compiler
errors I encountered.
However, at run time even my Hello World programs crash with a
segmentation fault.

A concrete program that crashes on my Linux x86_64 Fedora box is:

declare void @__ot_runtime_print_int(i8*, i32)

define void @main() {
entry:
  call void @__ot_runtime_print_int(i8* null, i32 12)
  br label %return

return: ; preds = %entry
  ret void
}

LLVM magic turns this into the assembly shown in the attached image.

I am no expert, but it seems to me that the address generated
(0x8000012c1eea) for the __ot_runtime_print_int function is incorrect,
as both nm and the debugger (kdbg) report that the address of the
function in question is 0x12c1eea.

So why is the high bit of the function address set? Anybody willing to
shed some light on what is happening here?

Thanks,
Maurice

[attachment: disassemble.png]

Are you running in the JIT? If not, what assembler are you using?
This looks like an issue with encoding the relative PC distance
between the call site and the call entry.

Notice that the call site is at a very high address (0x7fffff....)
while the target is very low. x86_64 can't encode a full 64-bit
immediate offset in the call instruction, so depending on how you are
generating this code either you or some code you depend on needs to be
jumping through some extra hoops to make this work.
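
In case it helps, here is a minimal standalone sketch of that range
check (the call-site address below is an assumed value in the
0x7fffff.... region; the target is the address nm reported):

#include <cstdint>
#include <cstdio>

// An x86_64 near call (opcode E8) is 5 bytes long and encodes a signed
// 32-bit displacement relative to the end of the instruction, so the
// target must lie within roughly +/-2GB of the call site.
static bool fitsInRel32(uint64_t callSite, uint64_t target) {
  int64_t disp = (int64_t)target - (int64_t)(callSite + 5);
  return disp >= INT32_MIN && disp <= INT32_MAX;
}

int main() {
  uint64_t callSite = 0x7fffff000000ULL; // assumed jitted call site
  uint64_t target   = 0x12c1eeaULL;      // __ot_runtime_print_int per nm
  printf("fits in rel32: %s\n", fitsInRel32(callSite, target) ? "yes" : "no");
  return 0;
}

This prints "no" for the addresses above, which is why the emitted call
needs some other mechanism (an indirect call or a stub) to reach the
target.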

Reid

Hi Reid,

> Are you running in the JIT? If not, what assembler are you using?
> This looks like an issue with encoding the relative PC distance
> between the call site and the call entry.

Yes, I am running in the JIT. The call site is in a jitted function,
while the called function is not jitted but linked into the executable.
This all worked fine until a recent update to current svn.

> Notice that the call site is at a very high address (0x7fffff....)
> while the target is very low. x86_64 can't encode a full 64-bit
> immediate offset in the call instruction, so depending on how you are
> generating this code either you or some code you depend on needs to be
> jumping through some extra hoops to make this work.

I am quite a novice with LLVM, so I simply set up the LLVM data
structures and let it do the magic.

Can you point me to where in the LLVM code the encoding of the call
offset is determined?
It seems something must have changed in this area to make it suddenly
stop working.

Thanks.

Kind regards,
Maurice

Any hints?

Maurice

Hi,

The llvm::ExecutionEngine::createJIT function has a 'CodeModel' enum
parameter taking the values JITDefault, Small, Kernel, Medium, or Large.

An appropriate value for this parameter fixes my problem.
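
For anyone who runs into the same thing, a minimal sketch of selecting
the code model through EngineBuilder (assuming an LLVM of roughly this
vintage; header locations and the builder API differ between releases):

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/JIT.h"
#include <string>

// module is an llvm::Module* that has already been populated.
std::string err;
llvm::ExecutionEngine *EE =
    llvm::EngineBuilder(module)
        .setErrorStr(&err)
        .setEngineKind(llvm::EngineKind::JIT)
        .setCodeModel(llvm::CodeModel::Large) // don't assume rel32 reach
        .create();

With the large code model the JIT no longer assumes the target fits in
a 32-bit displacement and instead materializes the full 64-bit address
(e.g. a mov imm64 followed by an indirect call).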

Kind regards,
Maurice