Luke Kenneth Casson Leighton wrote:
The way in which Gallium3D targets LLVM is that it waits until it receives
the shader program from the application, then compiles it down to LLVM IR.
That's too late to start synthesizing hardware (unless you're planning to
ship an FPGA as the graphics card, in which case reprogramming is still too
slow, and it'll be too expensive).
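(As a rough illustration of what compiling a shader down to LLVM IR involves,
here is a minimal sketch, not actual Gallium code, of lowering a single shader
multiply into IR through the LLVM-C API; the function name and the shape of
the "shader" are invented for illustration.)

/* Sketch only: roughly what a Gallium driver does at draw time when it
 * lowers a shader instruction such as "MUL dst, src0, src1" into LLVM IR.
 * The whole "shader" here is just one float multiply. */
#include <llvm-c/Core.h>

LLVMValueRef build_mul_shader(LLVMModuleRef mod)
{
    LLVMTypeRef f32 = LLVMFloatType();
    LLVMTypeRef params[2] = { f32, f32 };
    LLVMValueRef fn = LLVMAddFunction(mod, "shader_mul",
                                      LLVMFunctionType(f32, params, 2, 0));

    LLVMBuilderRef b = LLVMCreateBuilder();
    LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(fn, "entry"));

    /* dst = src0 * src1 */
    LLVMValueRef dst = LLVMBuildFMul(b, LLVMGetParam(fn, 0),
                                        LLVMGetParam(fn, 1), "dst");
    LLVMBuildRet(b, dst);
    LLVMDisposeBuilder(b);
    return fn;
}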
Nick... the Zynq-7000 series of dual-core Cortex-A9 800MHz 28nm CPUs
has an on-board 7-series Artix-7 or Kintex-7 FPGA (depending on the
Zynq range), and that's on the same silicon IC. So price is no longer
an obstacle [assuming reasonable volume].
Here's the EE Times article which mentions that the low-end version of
the 7000 series will be under USD $15 in mass volume:
http://www.eetimes.com/electronics-news/4213637/Xilinx-provides-first-product-details-for-EPP-ARM-based-devices
So, does that change things at all?
No, because that doesn't have:
- nearly enough gates. Recall that a modern GPU has more gates than a modern CPU, so you're orders of magnitude away.
- quite enough I/O bandwidth. Assuming off-chip TMDS/LVDS (sensible, given that neither the ARM core nor the FPGA has a high enough clock rate), the limiting I/O bandwidth is between the GPU and its video memory. That product claims it can do DDR3, which is not quite the same as GDDR5; a rough back-of-envelope comparison follows this list.
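(The transfer rates and bus widths below are illustrative assumptions, not
vendor specifications: a 32-bit DDR3 interface at 1066 MT/s, roughly what
the Zynq's hard memory controller offers, versus a 256-bit GDDR5 interface
at 5 GT/s on a contemporary discrete card.)

/* Peak-bandwidth back-of-envelope; figures are assumed, not quoted specs. */
#include <stdio.h>

static double peak_gb_per_s(double mtransfers_per_s, int bus_bits)
{
    /* transfers/s * bytes per transfer */
    return mtransfers_per_s * 1e6 * (bus_bits / 8.0) / 1e9;
}

int main(void)
{
    double ddr3  = peak_gb_per_s(1066.0, 32);   /* Zynq-class: ~4.3 GB/s */
    double gddr5 = peak_gb_per_s(5000.0, 256);  /* GPU-class: ~160 GB/s  */
    printf("DDR3 : %6.1f GB/s\n", ddr3);
    printf("GDDR5: %6.1f GB/s (%.0fx more)\n", gddr5, gddr5 / ddr3);
    return 0;
}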
You could try to trade off between the FPGA and the ARM core, but the ARM is only running at 800MHz. Maybe it's possible to get competitive performance, but it doesn't sound like a promising start.
I assumed that it would be possible to push other sections of the
Gallium3D code through the LLVM wringer (so to speak), not just the
shader program. I've seen papers, for example: a Seoul University
student won third prize in a competition run by Xilinx by implementing
parts of OpenGL ES 1.1 on an FPGA, porting MesaGL to it. He got fair
performance, too. I've always wondered what happened to his code, and
whether he would be required to comply with the GPL / LGPL...
Wow, that must've been a lot of work. This is what I was alluding to in the second paragraph of my previous email, except that I didn't realize someone had actually done it.
Of course, OpenGL ES 1.1 is still fixed-function hardware. That's a much easier problem, and not useful beyond current-generation cell phones.
Anyway, yes: what's possible, and where can people find out more
about how Gallium3D uses LLVM?
Ask the Mesa/Gallium folks; we really just get the occasional bug report. Personally, I follow zrusin.blogspot.com for my Gallium3D news.
And (for those not familiar with 3D), why is the shader program not
"static", i.e. why is a compiler needed at runtime at _all_? (If
there's an answer somewhere on a wiki already, that'd be great.)
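(For context, a minimal sketch of why runtime compilation is unavoidable:
under OpenGL 2.0 and later, the application hands the driver the shader as
source text only once it is running, so the driver must compile it at that
point, whether to the GPU's instruction set or, in a software rasterizer,
to LLVM IR for the CPU. The shader string below is made up for illustration
and error checking is omitted; in a real program these GL 2.0 entry points
are usually fetched through an extension loader.)

#include <GL/gl.h>

/* The driver never sees this string until the application runs. */
static const char *frag_src =
    "uniform vec4 tint;\n"
    "void main() { gl_FragColor = gl_Color * tint; }\n";

GLuint compile_fragment_shader(void)
{
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &frag_src, NULL); /* source arrives at runtime */
    glCompileShader(sh);                    /* driver compiles it now    */
    return sh;
}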
And: would moving this compiler onto the FPGA (so that it interprets
the shader program), making it an interpreter instead, be a viable
option? Just throwing ideas out here.
I'm sure there are many ways to produce an open graphics chip and card, and I'm sure there are even ways to use LLVM to simplify the hardware design, and maybe even the drivers. The point I'm trying to make is that This Stuff Is Really Hard, and there's no prepackaged answer that we folks on a mailing list are going to be able to give you based on a list of products. It's certainly not a situation of "push a button" and compile a new GPU. There are a lot of avenues to try, and I don't know enough to tell you which one to pursue.
If you have questions about LLVM itself, we'll happily do our best to answer them!
Nick