GSoC 2009: An LLVM Backend for TPA

Hi all,
I’d like to write an LLVM backend for the TPA processor architecture as part of the 2009 Google Summer of Code program.

The TPA processor is a new architecture that offers the possibility of high instruction-level parallelism with energy efficiency. TPA partitions the work between the compiler and the hardware differently from RISC and CISC architectures, and it has three features that require new compiler support. First, the compiler is responsible for partitioning the program into a sequence of structured blocks, each of which logically executes atomically. Second, the compiler must encode instruction dependences explicitly, eliminating the need for the hardware to discover dependences dynamically: the instructions within each block communicate directly, instead of communicating through shared registers. The TPA architecture may also impose restrictions on its blocks to simplify the microarchitecture. Third, the compiler must statically schedule the instructions for the dynamic issue model.
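To make the second feature concrete, here is a toy sketch of explicit-dependence ("target-form") encoding: each producer instruction names the operand slots of its consumers instead of writing a shared register, and an instruction fires as soon as all of its operands have arrived. The opcode names, encoding tuple, and interpreter are illustrative assumptions, not the real TPA ISA.

```python
# Toy target-form block: (opcode, immediate-or-None, [(target_id, operand_slot), ...])
# Producers route their result directly to consumer operand slots;
# no shared register file is involved inside the block.
block = {
    0: ("movi", 2, [(2, 0)]),     # constant 2 -> inst 2, slot 0
    1: ("movi", 3, [(2, 1)]),     # constant 3 -> inst 2, slot 1
    2: ("add",  None, [(3, 0)]),  # fires once both operand slots are filled
    3: ("out",  None, []),        # block output
}

def execute(block):
    """Dataflow-fire instructions as their operands arrive."""
    operands = {i: {} for i in block}
    ready = [i for i, (op, imm, _) in block.items() if op == "movi"]
    result = None
    while ready:
        i = ready.pop()
        op, imm, targets = block[i]
        if op == "movi":
            value = imm
        elif op == "add":
            value = operands[i][0] + operands[i][1]
        else:  # "out"
            result = operands[i][0]
            continue
        for tgt, slot in targets:
            operands[tgt][slot] = value
            # an instruction issues once all of its operand slots are filled
            need = 2 if block[tgt][0] == "add" else 1
            if len(operands[tgt]) == need:
                ready.append(tgt)
    return result

print(execute(block))  # 5
```

The point of the encoding is that issue order is driven purely by operand arrival, so the hardware never has to rediscover the dependence graph at run time.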

Based on the requirements of the TPA architecture, the project has three main tasks. First, write an LLVM backend that can produce TPA machine code. Second, optimize the formation of TPA blocks so that each block is full of useful instructions while still satisfying the architecture's constraints. Third, optimize the instruction scheduler so that the instructions in a block can execute with minimum cost.

Besides the basic LLVM backend for the TPA processor, two passes are crucial for our research work. One is a function pass called Hyperblock_Formation, which merges several basic blocks into a single TPA block. It will use a collection of heuristics, such as critical path ordering, load balancing, data cache locality, register outputs, and so on. The other is a block pass, the instruction scheduler, which statically maps the instructions onto the processor. To implement the instruction scheduler, we’ll use another set of heuristics, such as placement for locality, contention, effective window size, etc.
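As a rough illustration of what Hyperblock_Formation does, here is a minimal greedy sketch: starting from a seed basic block, keep merging the most frequently taken successor while the merged block stays under the architecture's instruction budget. The CFG shape, the MAX_BLOCK_INSNS value, and the single hot-path heuristic are illustrative assumptions only; the real pass would also weigh critical path ordering, load balancing, cache locality, and register outputs as described above.

```python
MAX_BLOCK_INSNS = 32  # hypothetical architectural block-size limit

# basic block -> (instruction count, {successor: taken-frequency})
cfg = {
    "A": (6,  {"B": 0.9, "C": 0.1}),
    "B": (10, {"D": 1.0}),
    "C": (4,  {"D": 1.0}),
    "D": (20, {}),
}

def form_hyperblock(cfg, seed):
    """Greedily grow one hyperblock along the hot path from `seed`."""
    members, size = [seed], cfg[seed][0]
    current = seed
    while True:
        succs = cfg[current][1]
        if not succs:
            break
        # hot-path heuristic: follow the most frequently taken edge
        nxt = max(succs, key=succs.get)
        if nxt in members or size + cfg[nxt][0] > MAX_BLOCK_INSNS:
            break  # would revisit a block or exceed the block budget
        members.append(nxt)
        size += cfg[nxt][0]
        current = nxt
    return members, size

print(form_hyperblock(cfg, "A"))  # (['A', 'B'], 16)
```

Here block D is hot but too large to merge, so the hyperblock stops at A+B; the real heuristics exist precisely to make such trade-offs well under the architecture's block constraints.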

As for my personal background, I’m a graduate student at the University of Science and Technology of China with a strong background in compiler theory. My major is computer architecture, and my team’s main job is to research a new architecture that can take full advantage of hardware resources to speed up programs. Recently, we proposed the dataflow-driven TPA architecture, so we need to implement a compiler for it, and I chose LLVM as the supporting framework. I believe this project will be useful to our research, and we’ll contribute a TPA target backend to the LLVM community.

> Hi all,
> I'd like to write an LLVM backend for TPA processor architecture as part of 2009's Google Summer of Code program.

Who is using the TPA processor? Is this generally useful, or specific to your research program?

-Chris

I'm not certain, but it sounds like he's talking about the TRIPS research processor from UT Austin.

See The TRIPS Project: Tera-op, Reliable, Intelligently adaptive Processing System

--Owen