Thinking about "whacky" backends

I've been tossing around some ideas about high-level backends.

Say, have LLVM emit Perl code.

Sounds whacky but isn't. It's good for the first bootstrapping phase in environments where you don't have a C compiler and no cross-compiled binary is available for download, but you can execute Perl.
It also makes a great inspect-the-sources-with-an-editor stage for aspiring compiler writers.

Or emit JVM bytecode, or maybe target the upcoming Parrot VM that the Perl community is building.

The questions I have are:
1. Is this really a useful approach?
2. How much work would such a backend be?

Regards,
Jo

Sorry, forgot to CC the list.

What benefit do you get from having a backend here rather than an interpreter for LLVM IR?

Cameron

<snip>

Now my idea for a whacky backend: just a wrapper of the bitcode writer with its
own special target triple, bitcode-target-neutral, and a generic data layout
that aligns to single bytes as a placeholder only. It should disallow
overriding the alignment of individual instructions, to avoid settings that are
illegal for the data layout. When compiling with llc, it should require that the
target triple and data layout be overridden by a real processor and OS. This
would allow LLVM to actually function as a statically compiled virtual machine
when used in conjunction with my wrapper of the libc runtimes. Of course the
wrapper code would allow special inlining, as it would be the only interface to
the underlying OS.
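
To make this concrete, here's a rough sketch of what tagging a module could look like through the LLVM C API. The triple and data-layout strings are placeholders of my own invention, not an existing target, and the real wrapper would also need to reject per-instruction alignment overrides:

#include <llvm-c/Core.h>
#include <llvm-c/BitWriter.h>

int main(void) {
    LLVMModuleRef mod = LLVMModuleCreateWithName("neutral_module");

    /* Placeholder triple and byte-aligned data layout; llc would be required
       to substitute a real processor/OS triple and layout before codegen. */
    LLVMSetTarget(mod, "bitcode-target-neutral");
    LLVMSetDataLayout(mod, "e-p:32:8-i32:8-i64:8-f64:8");

    LLVMWriteBitcodeToFile(mod, "neutral.bc");
    LLVMDisposeModule(mod);
    return 0;
}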

What do you think of my whacky backend idea?

This is pretty much what's happening with Portable Native Client, right?

http://www.chromium.org/nativeclient/pnacl

See also the first presentation from the November LLVM meeting: http://llvm.org/devmtg/2010-11/

-Henry

Hello Henry,

Yes, it is slowly happening there too. But their double-sandbox-in-the-browser approach versus my simple wrapper approach makes things a bit different. I don't work for Google, and I'd like to see browsers take a less prominent role. I've seen the video and, out of interest, joined the NaCl mailing list. Almost nothing is happening with the PNaCl end of things on that list; I'm either on the wrong list or they're keeping things hush-hush.

--Sam

PNaCl fixes data layout to be just "portable enough" to cover x86,
ARM, and x86_64, IIUC. The size of a pointer in PNaCl is always 32
bits, for example. It would still be useful to be able to generate
target neutral bitcode that doesn't need a special runtime and can
interface with regular native libraries.

We've talked about this before, but just thinking about it a bit
further, here are some examples of features people have asked for to
help make their frontends more target-neutral:
- a pointer-sized integer type
- unions
- bitfields

My understanding is that these are rejected because the primary
consumers of the IR are the optimizers and the code generators. They
don't want to have to deal with these extra features complicating the
IR. They prefer to see bitcasts and logical operators over unions and
bitfields.
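
As a concrete illustration (made-up struct and helper names, in C terms rather than IR), a frontend today has to turn a bitfield access into explicit masking and shifting on the underlying word, something like:

#include <stdint.h>

/* struct { unsigned a : 3; unsigned b : 5; } stored in one 32-bit word */

static uint32_t get_b(uint32_t word) {
    return (word >> 3) & 0x1Fu;                            /* extract the 5-bit field b */
}

static uint32_t set_b(uint32_t word, uint32_t b) {
    return (word & ~(0x1Fu << 3)) | ((b & 0x1Fu) << 3);    /* clear, then insert b */
}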

An alternative is to introduce these features, but add a target
lowering phase that strips them from the IR and replaces them with the
target-dependent equivalent. The downside here is that it undermines
the "single IR" LLVM approach. OTOH, it's a lot like running mem2reg
before doing optimizations.

Reid

Once again, forgot to CC the list.

A backend that's self-sufficient and covers the entire Unixoid world.
That cuts down on the number of binaries that one needs to provide for autoinstallers and such.

Generated Perl could be used to bootstrap an LLVM IR interpreter, for example.

Regards,
Jo

Cameron Zwarich wrote:

What benefit do you get from having a backend here rather than an interpreter for LLVM IR?

The same thing as an interpreter gives you, just as a native build (no need for a separate interpreter program, better speed, etc.).

This would be beneficial anywhere that "build once, deploy anywhere" functionality is desired, without resorting to using a higher-level language like C# or Java.
Granted, the application that interprets, or compiles and links, the resulting bitcode would still be required on the system, much like a VM for such a language; but for developers who aren't familiar with such languages and have no particular desire, or very little time, to become familiar with them, this would be an excellent solution.

That said, it seems like it ought to be possible to do the same thing by emitting bitcode for all supported platform/arch combinations and compressing them in an archive, then decompressing and either interpreting or JIT-compiling the appropriate bitcode for the platform. This would just be a more flexible means to that same end.
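
A rough sketch of the selection step, assuming a made-up naming scheme like app-<os>-<arch>.bc inside the archive, could be as simple as:

#include <stdio.h>
#include <sys/utsname.h>   /* POSIX uname(); a Windows build would need its own query */

int main(void) {
    struct utsname u;
    char path[256];
    if (uname(&u) != 0)
        return 1;
    /* e.g. "app-Linux-x86_64.bc" or "app-Darwin-arm.bc" */
    snprintf(path, sizeof path, "app-%s-%s.bc", u.sysname, u.machine);
    printf("would hand %s to the interpreter or JIT\n", path);
    return 0;
}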

Hi Nate,

I've successfully ported one bitcode from Linux to Mac to Windows. All were x86 and the program was text-based, but I'd say my LLVM Wrapper would be worth some effort in the future if I could just get some help. Currently it just wraps stdio.h with its own functions.
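
To give a flavor of what the wrapper layer looks like (these llvmw_* names are illustrative, not the actual SourceForge code):

#include <stdio.h>
#include <stdarg.h>

/* The portable bitcode only ever calls llvmw_* symbols; each platform ships
   a native build of this file that forwards to the host libc. */

int llvmw_puts(const char *s) {
    return puts(s);
}

int llvmw_printf(const char *fmt, ...) {
    va_list ap;
    va_start(ap, fmt);
    int n = vprintf(fmt, ap);
    va_end(ap);
    return n;
}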

Here's some of what it would take to make portable bitcodes in C or LLVM Assembly:

* Convert all preprocessor conditionals to regular conditionals so that both the #ifdef parts and the #else parts make it into the bitcode. Don't worry about bloat since the installer will be able to run constant folding and dead-code elimination to get rid of the unused parts. The constants will be supplied by the installer in the form of an external bitcode for linkage. The only exception to this rule would be the guard-defines in .h files.

* Make sure that the code in the distribution bitcode is endian-agnostic 32-bit code. A sandbox will be needed on 64-bit systems or else a separate 64-bit bitcode package will still be required.

* Make sure all external dependencies, including the libc runtimes, are accessed through the wrapper rather than called natively from the bitcode. All external libraries are system-specific.

There are probably other requirements that I've either overlooked or forgotten. I'd like to have my wrapper expanded to cover more extensive functionality, so if somebody would like to join the project, let me know and I'll add you to the LLVM Wrapper project on SourceForge.net.

--Sam

Hi Nate,

I've successfully ported one bitcode from Linux to Mac to Windows. All were x86 and the program was text-based, but I'd say my LLVM Wrapper would be worth some effort in the future if I could just get some help. Currently it just wraps stdio.h with its own functions.

Naturally that would work perfectly fine on a similar architecture with common dependencies.
I'm a hobbyist game developer, so that is my primary concern. Even using a cross-platform game library like Irrlicht or Ogre, there can be problems with using the same bitcode on each platform, especially in cases where you have to implement system-specific code to cover things those libraries don't provide (wrappers for MessageBox/NSRunAlertPanel, Clipboard/Pasteboard, etc.).

* Convert all preprocessor conditionals to regular conditionals so that both the #ifdef parts and the #else parts make it into the bitcode. Don't worry about bloat since the installer will be able to run constant folding and dead-code elimination to get rid of the unused parts. The constants will be supplied by the installer in the form of an external bitcode for linkage. The only exception to this rule would be the guard-defines in .h files.

I'm a huge fan of the C preprocessor, and this just seems like a terrible idea.
A better idea would be to generate some sort of metadata from these #ifdefs instead.

* Make sure that the code in the distribution bitcode is endian-agnostic 32-bit code. A sandbox will be needed on 64-bit systems or else a separate 64-bit bitcode package will still be required.

Endian agnosticism without using the preprocessor just seems... burdensome.
However, if I'm understanding this correctly, you're saying that bitcode built for x86 Linux would function on 32-bit ARM, PPC, and SPARC Linux (if endian-agnostic)?

* Make sure all external dependencies, including the libc runtimes, are accessed through the wrapper rather than called natively from the bitcode. All external libraries are system-specific.

The meaning here being that I need all dependencies to be part of the bitcode, as opposed to being natively linked later? This simply isn't practical for games.

Hello again Nate,

Fixing preprocessor macros is simple:

#ifdef _WIN32   /* _WIN32 is the usual predefined macro; the idea is the same */
// windows code here
#else
// other code here
#endif

becomes

#include <stdbool.h> // needed for bool in C

extern const bool isWindows; // note: I'm not sure if const is legal here but it should be

if (isWindows) {
    // windows code here
} else {
    // other code here
}

And isWindows will be defined in the wrapper, so the external symbol gets resolved at final link time. The point is that if only the Windows code were in the main bitcode, there would be no way to get the other code to link on the other OSes, since the preprocessor would have killed it before it ever got compiled. The same technique can be applied to endianness switching.
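
For the endianness case, a sketch of the same trick might look like this (the names here are just for illustration; isBigEndian would be resolved by the wrapper at link time, exactly like isWindows):

#include <stdint.h>
#include <stdbool.h>

extern const bool isBigEndian;   /* defined by the platform wrapper */

/* Convert a host-order 32-bit value to little-endian file order at run time,
   instead of deciding with an #ifdef; the installer can constant-fold this. */
static uint32_t to_le32(uint32_t v) {
    if (isBigEndian)
        return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
               ((v << 8) & 0x00FF0000u) | (v << 24);
    return v;
}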

To make a cross-platform library such as Irrlicht work on more than one platform, Irrlicht would need a wrapper as well. This is turning into a lot of work, as you can see, which is why I'm asking for help. The rewards are great, however: it means you can compile once on Windows and install that bitcode on Windows, Linux, Mac, FreeBSD, etc. If you follow the non-preprocessor if-statement technique for your endianness swapping, it should work on PPC, Intel, ARM, SPARC, and any other LLVM-supported 32-bit processor. Only the wrapper would need to be system-specific.

Does this make sense? Please keep writing if you have any additional questions! I think we're on a roll here.

--Sam

Have you looked at Emscripten? (https://github.com/kripken/emscripten)

"Emscripten is an LLVM-to-JavaScript compiler. It takes LLVM bitcode -
which can be generated from C/C++, using llvm-gcc or clang, or any
other language that can be converted into LLVM - and compiles that
into JavaScript, which can be run on the web (or anywhere else
JavaScript can run)."

You may be able to estimate the amount of work needed based on the
development effort required by Emscripten.

Csaba

That said, it seems like it ought to be possible to do the same thing
by emitting bitcode for all supported platform/arch combinations

Wait... is bitcode not platform-agnostic?
I thought it was.

and compressing them in an archive, then decompressing and either
interpreting or JIT-compiling the appropriate bitcode for the
platform. This would just be a more flexible means to that same end.

Not sure how that is more flexible - care to elaborate?

Regards,
Jo

C/C++/Objective-C bitcode is not platform-agnostic. It's possible to make it platform-agnostic with a wrapper for libc, as I have started to do. The tough part will be the C++ runtimes.

Reid Kleckner <reid.kleckner@gmail.com> writes:

An alternative is to introduce these features, but add a target
lowering phase that strips them from the IR and replaces them with the
target-dependent equivalent.

I like this approach a lot.

The downside here is that it undermines the "single IR" LLVM approach.

LLVM has never had a single IR as long as I've worked on it. It's
always had at least five:

- LLVM IR
- SCEV
- Selection DAG
- Schedule DAG
- Machine IR

This is good and appropriate.

                              -Dave

Samuel Crow <samuraileumas@yahoo.com> writes:

Here's some of what it would take to make portable bitcodes in C or LLVM Assembly:

A look at the work done on ANDF in the 90's may be helpful. I've only
skimmed it but there's been some deep thinking about stuff like this.

                             -Dave

compressing them in an archive, then decompressing and either
interpreting or JIT-compiling the appropriate bitcode for the
platform. This would just be a more flexible means to that same end.

Not sure how that is more flexible - care to elaborate?

More flexible for the programmer, not for the system. There are many
pieces of code where porting between Windows, OS X, iOS, Android, and
PC Linux/*BSD/etc. would require a ton of preprocessor work.

Ah I see.

I'd avoid using control flow, unless the differences are really minimal and the control flow is easy to understand. For anything that's getting even slightly complicated, I'd use different bitcode files.

Indeed. It's my attempt at meeting halfway, and it really offers nothing over using the preprocessor besides the fact that it would result in a single all-functional bitcode without requiring distinct run-time checks.

Whether these are packed into a single zip file or available for separate download is something that should be decided by the deployer: sometimes space is at a premium, sometimes it's bandwidth or latency. Let them decide which parts of the bitcode file tree they want to distribute, and let them package those into zip files as they see fit.
There are transparent zip filesystems that let you access a file inside a zip archive as if it were part of the normal file system. (Java does this all the time, and this part of the Java infrastructure works really well.)

I was suggesting a method of storage, not necessarily of distribution.
Indeed, the best method for a distribution system would be to transmit only the relevant bitcode.

Thanks Dave,

A little bit of Googling reveals a Wikipedia article on the Architecture Neutral Distribution Format, which links to the TenDRA Distribution Format; its specification is at http://docs.tendra.org/reference/xhtml/guide. It speaks in lofty, abstract language terms and doesn't appear to be a binary representation of code. I'm not sure any of this actually helps us, considering that LLVM already has mechanisms for most of these functions.

Here we go again,

--Sam