only one letter reached the valgrind-developers mailing list. I'll quote
the first message of the thread so that those who do not read llvmdev
know what this discussion is about.
=== Begin of the first message ===
I've seen on the LLVM Open Projects page  an idea about using LLVM to
generate native code in Valgrind. From what I know, Valgrind uses libVEX
to translate native instructions into a bitcode, which is used to add the
instrumentation and then translated back to native code for execution.
Valgrind and LLVM are two tools that I use nearly every day. I'm also
very interested in code generation and optimization, so adding the
possibility to use LLVM to generate native code in libVEX interests me
very much. Is it a good idea? Could an LLVM backend bring something
useful to Valgrind (for instance, faster execution or more targets)?
I've sent this message to the LLVM and Valgrind mailing lists because
I've originally found the idea on the LLVM's website, but Valgrind is
the object of the idea. By the way, does anyone already know if LLVM or
Valgrind will be a mentoring organization for this year's GSoC?
You can find in  the list of my past projects. During the GSoC 2011,
I had the chance to use the Clang libraries to compile C code, and the
LLVM JIT to execute it (with instrumented stdlib functions). I have also
played with the LLVM C bindings to generate code when I explored some
parts of Mesa.
 : http://llvm.org/OpenProjects.html#misc_new
 : http://steckdenis.be/page-projects.html
=== End of the first message ===
The idea of using an LLVM backend in some dynamic binary translation
(DBT) project has become popular recently. Unfortunately, it has not
proved to be a good one.
I suggest you check the related work in QEMU. The DBT parts of QEMU and
Valgrind work in a similar way, and there have been a number of attempts
at using LLVM as a QEMU backend. They mostly resulted in slowdowns: in
the authors reported a 35x slowdown, and in  there was around a 2x
slowdown. Finally, in  the authors reported a performance gain, but
there are some caveats:
1. They used LLVM not only for the backend: they replaced the internal
representation with LLVM IR. This is not an option for Valgrind, because
you would need to rewrite all existing tools (including third-party
ones) to do it.
2. They use the SPEC CPU benchmarks to measure their speedup. The
important thing about these tests is that they have little code to
translate but a lot of computation to do in the translated code. Even
so, some of these tests do not do too well (like 403.gcc). On real-life
applications (like Firefox), where there is a lot of library code to
translate and not so much computation to do, the results may be totally
different.
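The amortization argument above can be made concrete with a toy cost
model (all numbers here are hypothetical, chosen only to show the shape
of the trade-off, not measured from QEMU or Valgrind):

```python
def total_time(blocks, translate_cost, exec_time):
    """Toy DBT cost model: total run time is the one-off cost of
    translating every block plus the time spent executing them."""
    return blocks * translate_cost + exec_time

# SPEC-like workload: few blocks to translate, lots of computation.
# A 10x more expensive (LLVM-style) translator pays for itself.
spec_simple = total_time(blocks=10_000, translate_cost=0.0001, exec_time=100)
spec_llvm   = total_time(blocks=10_000, translate_cost=0.001,  exec_time=80)

# Firefox-like workload: a huge amount of library code to translate,
# little hot computation. The same translator now loses badly.
app_simple = total_time(blocks=5_000_000, translate_cost=0.0001, exec_time=100)
app_llvm   = total_time(blocks=5_000_000, translate_cost=0.001,  exec_time=80)
```

With these made-up numbers the LLVM-style translator wins on the
SPEC-like workload (90 vs 101) and loses on the application-like one
(5080 vs 600), which is exactly the pattern described above.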
LLVM does not do well as a DBT backend, mostly for two reasons.
First, in DBT you need to translate while the application is running,
and you need to do it really fast. A compiler is not optimized for that
task. The LLVM JIT? Maybe.
Second, in DBT you translate code in small portions, like basic blocks
or extended basic blocks. These have a very simple structure: there are
no loops, and there is no redundancy from translating a high-level
language into a low-level one. There is nothing that sophisticated
optimizations can do better than very simple ones.
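To illustrate why heavyweight optimization has so little to bite on,
here is a hypothetical straight-line fragment of the kind a DBT
produces (a made-up three-address IR, not VEX or LLVM IR), with a
minimal one-pass constant-folding routine. On loop-free code one such
sweep already reaches a fixed point, so a full optimizing pipeline has
nothing left to improve:

```python
# One translated basic block: straight-line code as (op, dest, src1, src2)
# tuples. No loops, no internal branches.
block = [
    ("const", "t0", 4, None),
    ("const", "t1", 8, None),
    ("add",   "t2", "t0", "t1"),
    ("store", None, "t2", "mem"),
]

def fold_constants(block):
    """Single forward sweep of constant propagation and folding.
    On straight-line code there is no need to iterate to a fixpoint."""
    env, out = {}, []
    for op, dest, a, b in block:
        if op == "const":
            env[dest] = a
            out.append((op, dest, a, b))
        elif op == "add" and a in env and b in env:
            env[dest] = env[a] + env[b]
            out.append(("const", dest, env[dest], None))
        else:
            out.append((op, dest, a, b))
    return out

folded = fold_constants(block)  # the 'add' collapses to a constant
```

A real optimizer could do nothing more with this block than the simple
pass did, which is the point: the sophistication is wasted on inputs
this small and this regular.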
In conclusion, I second what has already been said: this project sounds
like fun to do, but do not expect many practical results from it.
> It would also be interesting to cache the LLVM-generated code
The tricky part here is to establish a matching between binary code
fragments and cached translations from previous runs. In the worst
case, all you know about the binary code is its address (which can vary
between runs) and the code bytes themselves.
 : "Dynamically Translating x86 to LLVM using QEMU"
 : llvm-qemu project.
 : "LnQ: Building High Performance Dynamic Binary Translator
with Existing Compiler Backends"