Target independence using "opaque"? How else can it be done?

Hello all,

I'm writing a backend for our script-language compiler, and I'm currently
writing an IR module for the runtime library that contains some support
routines called by generated code.

The IR module contains calls to "malloc", whose argument type depends on
the size of "size_t". Since I don't know the target when writing the IR
module for the runtime library, I thought about using an "opaque" type
instance in place of "size_t". When loading the IR module, I would refine
the "opaque" to either i64 or i32, depending on which target I'm using.
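
In other words, after loading on a 64-bit target the module should behave
as if I had written the aliases like this (the refinement itself would
happen programmatically, not in the .ll file):

    %sizet_ty = type i64
    %intptrt_ty = type i64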

For example, I currently have:

    ; these opaque types are replaced at load time by codegen::RuntimeLib
    %sizet_ty = type opaque
    %intptrt_ty = type opaque

    ; ... then in a function I do:
    %sizeof_value_ty = ptrtoint %value_ty* getelementptr (%value_ty* null, i32 1) to i32
    %numBytesToAlloc = mul i32 %num, %sizeof_value_ty
    %numBytesSizeT = bitcast i32 %numBytesToAlloc to %sizet_ty
    %memory = call i8* @malloc(%sizet_ty %numBytesSizeT)

However, compiling this .ll file to bitcode always fails at the bitcast
instruction with:

   error: invalid cast opcode for cast from 'i32' to 'opaque'

It appears one cannot use "opaque" as a placeholder in this way? What else
can I do to achieve my goal? Thanks for any help!

Hi Johannes,

> I'm writing a backend for our script-language compiler, and I'm currently
> writing an IR module for the runtime library that contains some support
> routines called by generated code.
>
> The IR module contains calls to "malloc", whose argument type depends on
> the size of "size_t". Since I don't know the target when writing the IR
> module for the runtime library, I thought about using an "opaque" type
> instance in place of "size_t". When loading the IR module, I would refine
> the "opaque" to either i64 or i32, depending on which target I'm using.

You can inject a declaration for your own malloc function that always takes
a 64-bit argument into the bitcode:

   declare i8* @litb_malloc(i64)

When you know how big a pointer is on the target, you can provide an
appropriate body. For example, on a 32-bit machine you could inject:

   define i8* @litb_malloc(i64 %s) {
     ; truncate the 64-bit size to the target's 32-bit size_t
     %t = trunc i64 %s to i32
     %m = call i8* @malloc(i32 %t)
     ret i8* %m
   }
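
Your runtime library IR then does all of its size arithmetic in i64 and
calls the wrapper instead of "malloc" directly. A minimal sketch of such a
call site, reusing the names from your example (with the ptrtoint widened
to i64, and assuming %num is i64 as well):

   %sizeof_value_ty = ptrtoint %value_ty* getelementptr (%value_ty* null, i32 1) to i64
   %numBytesToAlloc = mul i64 %num, %sizeof_value_ty
   %memory = call i8* @litb_malloc(i64 %numBytesToAlloc)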

Ciao, Duncan.

Duncan Sands wrote:

> You can inject a declaration for your own malloc function that always
> takes a 64-bit argument into the bitcode:
>    declare i8* @litb_malloc(i64)
> [...]

Thanks, that makes a lot of sense. And it's small enough not to be a pain
in the ass to inject programmatically. I'm also using "opaque" in other
places, and I wonder whether that use is correct:

    %envpad_ty = type opaque
    %env_ty = type {
        ; current save-point
        %savepoint_ty*,
        ; target-specific padding
        %envpad_ty
    }

The above represents the global environment of my scripting runtime, of
which I only access the first member directly from the IR code. I want to
substitute an appropriately sized array of i8 for "%envpad_ty", so that
%env_ty ends up with the full size of one environment. Is this the correct
way to tackle that?
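
For illustration, on a hypothetical target where the padding works out to
12 bytes (a made-up number; the real value would be computed per target),
I would expect the loaded module to end up equivalent to:

    %envpad_ty = type [12 x i8]
    %env_ty = type {
        ; current save-point
        %savepoint_ty*,
        ; target-specific padding, now concrete
        %envpad_ty
    }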