Using LLVM as an IR for heterogeneous platforms (e.g. CPU+GPU) seems
like an obvious idea.
In theory, someone could write plain C/C++ code, have Clang translate
it to LLVM IR, and then
pack it together with JIT back-ends into an appropriate executable. Thus all
the hassles of OpenCL or CUDA could be avoided. I wonder if there
already is a related project?
Of course, in practice this gets much more complicated, mainly due to
memory-access problems.
But these issues can be addressed later. For the moment I'm just
interested in whether the general idea is reasonable.
PS: I strongly disagree with the LLVM FAQ entry where sizeof is used to
argue that C and C++ are platform-dependent languages. Indeed, sizeof is there
to abstract from platform specifics. Of course you can (ab)use it (e.g.
specialize a template that takes an int as a template parameter and then
pass it a sizeof result). But in general I think that neither C(++) is
inherently platform-dependent nor does LLVM IR need to be.
Does your C code ever use the 'long' type? If so, the LLVM IR will be
different depending on whether it's targeting linux-32 or linux-64. Do
you ever use size_t? Same problem. Do you ever use a union containing
both pointers and integers? See above. In principle, it's possible to
write platform-independent IR, or even C code that compiles to
platform-independent IR. In practice, especially if you include any
system headers, it's remarkably hard.
Perhaps you would be interested in joining the LLVM-LibC Wrapper project at http://sourceforge.net/projects/llvmlibc/ .

Currently we have stdio.h wrapped in such a way that all structure pointers are converted to byte pointers, since their structure is not known until link time, and all success/failure return codes are converted to boolean values (i1 in LLVM terminology). Be sure to see our wiki at http://sourceforge.net/apps/mediawiki/llvmlibc/ for the changes we've implemented in our wrapper to make libc accessible to LLVM cross-platform and cross-architecture users.