A small question: how do I ensure memory alignment? I want all malloc'd
memory, globals, and functions to be 4-byte aligned. Does LLVM have any
support for this?
In the medium term, we plan to add alignment requirements to the
alloca/malloc instructions and to globals (variables and functions), but we
do not have this yet. Currently the code generator defines the alignment of
various structures based on the preferred alignment (usually specified by
the platform ABI). If you have a 32-bit object (like an int or a float),
you can be pretty certain that the object will be four-byte aligned
(for global variables, allocas, and mallocs). Functions SHOULD be at least
four-byte aligned; if they are not, please file a bug and we'll get it fixed.
I'm currently implementing a small toy Scheme compiler, and want to use
the lowest 2 bits for type tags. It's currently 380 lines of Scheme code,
quite similar to the compiler in SICP, which I hope to make
self-applicable later on.
Cool! I'm currently out of town so I can't try it out, but I will when I
get a chance. This sounds like a neat project!
Can I change the calling conventions / frame handling, so that call frames
are allocated on the heap instead of on the stack? Right now all my
compiled functions take an environment as an argument to look up variables
in the Scheme function. It would perhaps be nicer if I could use the call
frames instead, but I can't, since lambdas can escape when the
frame is popped off the stack, for example:
I think that taking an environment pointer is the best way to go. The
semantics of LLVM are supposed to match those of a microprocessor, so if
you want custom semantics for calls (such as allocating the frame on the
heap), they should be implemented explicitly in the LLVM code. If the
resulting code is not good, please let us know and we can tune
the code generator or potentially add a new domain-specific optimization.
One important thing that we don't have (but which will be added when there
is interest) is support for explicitly marked tail calls. Currently there
is support for tail call *optimizations* (e.g., turning a naive pow into a
loop), but no way for a front-end to guarantee that they happen. We will
eventually allow the front-end to mark a call as a tail call, but
no one has implemented this yet (it shouldn't be hard). In any case, the
optimizer is pretty aggressive about eliminating tail calls, so you
probably won't run into problems except in absurd situations.
Writing a Scheme front-end for LLVM sounds like a great project: please
keep us informed of how it goes, and when it gets mostly functional, let us
know so we can add a link on the web site.