I am developing a hardware generation tool which converts a C/C++ application to hardware; it is not a Pass but a separate project. I am going to use LLVM as the front-end optimization tool. I am curious how I can invoke the LLVM front-end optimizations and then get a pointer to the LLVM IR data generated by the front end. I would like to perform some optimizations of my own on that IR. Can anybody please direct me to some example code or documentation about that?
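For context, one common command-line route to get optimized IR out of the front end looks like this (file names here are placeholders; `clang` and `opt` are the standard LLVM tools, and the pass list is just an example):

```shell
# Emit LLVM IR (textual form) from a C source file, with -O2 front-end optimization.
clang -O2 -S -emit-llvm input.c -o input.ll

# Run additional IR-level passes with opt (new pass manager syntax).
opt -S -passes='mem2reg,instcombine' input.ll -o optimized.ll
```

To do the same thing in-process rather than via files, `llvm::parseIRFile` (from `llvm/IRReader/IRReader.h`) returns a `std::unique_ptr<Module>`, which is the pointer to the IR you can then hand to a pass manager or walk yourself.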
I have done something like this. I am focusing on translating LLVM IR to
Verilog HDL, and not worrying about the C/C++ source code at the moment.
I implemented this as an LLVM backend, as Kunkel said.
At the current state, I have written my own scheduling DAGNode classes,
which allow me to schedule multiple nodes to the same cycle and to perform
resource binding and register allocation.
I am also planning to implement a generic LLVM target machine, so I can
first translate LLVM IR to LLVM machine code to leverage the existing
low-level LLVM optimizations, and then translate that machine code into
my own scheduling DAGNodes.
You could also have a look at http://legup.org/, which is an
open-source high-level synthesis project (not my project).