Question from a passer-by

Hi all,

I was wondering: what is the real benefit of translating from LLVM's
RISC-like instruction set to x86, which is a more CISC-style design?
Especially after emitting that RISC stream from a much higher-level
language like C or C++? I always thought that to translate logic
efficiently, as much information as possible has to be retained down
the translation chain. Doesn't the opposite happen here: human logic
in the form of C/C++ is first ground down into much smaller LLVM
primitives, which then have to be assembled (translated) into the
"larger-grained" x86 instruction set with its more complex
instructions, not to mention the MMX/SIMD subsets? How does LLVM
solve the problem of optimizing the translation from a set of smaller
primitives into a set of larger primitives?

Thanks.

I was wondering: what is the real benefit of translating from LLVM's
RISC-like instruction set to x86, which is a more CISC-style design?

One benefit of translating to LLVM IR is that it makes optimizations
easier to write and more powerful... you might get a better feel for
it by looking at
Static single-assignment form - Wikipedia and some of
the linked articles. Another benefit is that it can be used as a
target for front-ends for many different languages.

Especially after emitting that RISC stream from a much
higher-level language like C or C++?

Despite the fact that x86 is CISC, it's easier to generate optimized
x86 code from a simpler IR... the complexities of x86 and the
complexities of C/C++ are mostly orthogonal.

-Eli