Documentation about converting GIMPLE IR to LLVM IR in LLVM-GCC/DragonEgg

Dear All,

I am trying to understand the process followed for converting GIMPLE IR to LLVM IR in LLVM-GCC/DragonEgg - in particular, the conversion of OpenMP-extended GIMPLE IR to LLVM IR. It would be great if anybody could point me to some documentation before I delve into the related source code myself.

Cheers
-mahesha

I don't think there is formal documentation, but you should have a look at the dragonegg sources (http://dragonegg.llvm.org); LLVM-GCC has been dead for some time now.

There are a lot of comments in the code. See especially the file Convert.cpp (http://llvm.org/svn/llvm-project/dragonegg/trunk/src/Convert.cpp), which takes care of the GIMPLE-to-LLVM conversion.
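
If you want to watch the conversion happen, the dragonegg docs describe how to get the LLVM IR out of gcc. Something like the following should work (adjust the gcc version and the plugin path for your setup):

  gcc-4.7 -fplugin=./dragonegg.so -fplugin-arg-dragonegg-emit-ir -fopenmp -S foo.c -o foo.ll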



– Jean-Daniel

Hi Mahesha,

I am trying to understand the process followed for converting GIMPLE IR to LLVM
IR in LLVM-GCC/DragonEgg - in particular, the conversion of OpenMP-extended
GIMPLE IR to LLVM IR. It would be great if anybody could point me to some
documentation before I delve into the related source code myself.

dragonegg doesn't have to do anything special for openmp, since gcc has already
lowered it to a bunch of extra functions and library calls by that point.

Ciao, Duncan.

PS: Here's an example:

void foo()
{
  int i;

  #pragma omp parallel
  {
    #pragma omp parallel
    {
      #pragma omp parallel
      {
        i++;
      }
    }
  }
}

-> (the LLVM IR is a direct transliteration of the gimple):

target datalayout = "e-p:64:64:64-S128-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f16:16:16-f32:32:32-f64:64:64-f128:128:128-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64"
target triple = "x86_64-unknown-linux-gnu"

module asm "\09.ident\09\22GCC: (GNU) 4.7.1 20120603 (prerelease) LLVM: 3.2svn\22"

%struct..omp_data_s.2 = type { i32* }
%struct..omp_data_s.1 = type { i32* }
%struct..omp_data_s.0 = type { i32 }

define internal void @foo._omp_fn.2(i8* nocapture %.omp_data_i) nounwind uwtable {
entry:
   %0 = bitcast i8* %.omp_data_i to i32**
   %1 = load i32** %0, align 8
   %2 = load i32* %1, align 4
   %3 = add i32 %2, 1
   store i32 %3, i32* %1, align 4
   ret void
}

define internal void @foo._omp_fn.1(i8* nocapture %.omp_data_i) nounwind uwtable {
entry:
   %.omp_data_o.3 = alloca %struct..omp_data_s.2, align 8
   %0 = bitcast i8* %.omp_data_i to i32**
   %1 = load i32** %0, align 8
   %2 = getelementptr inbounds %struct..omp_data_s.2* %.omp_data_o.3, i64 0, i32 0
   store i32* %1, i32** %2, align 8
   %3 = bitcast %struct..omp_data_s.2* %.omp_data_o.3 to i8*
   call void @GOMP_parallel_start(void (i8*)* @foo._omp_fn.2, i8* %3, i32 0) nounwind
   call void @foo._omp_fn.2(i8* %3) nounwind uwtable
   call void @GOMP_parallel_end() nounwind
   ret void
}

declare void @GOMP_parallel_start(void (i8*)*, i8*, i32) nounwind

declare void @GOMP_parallel_end() nounwind

define internal void @foo._omp_fn.0(i8* %.omp_data_i) nounwind uwtable {
entry:
   %.omp_data_o.4 = alloca %struct..omp_data_s.1, align 8
   %0 = bitcast i8* %.omp_data_i to i32*
   %1 = getelementptr inbounds %struct..omp_data_s.1* %.omp_data_o.4, i64 0, i32 0
   store i32* %0, i32** %1, align 8
   %2 = bitcast %struct..omp_data_s.1* %.omp_data_o.4 to i8*
   call void @GOMP_parallel_start(void (i8*)* @foo._omp_fn.1, i8* %2, i32 0) nounwind
   call void @foo._omp_fn.1(i8* %2) nounwind uwtable
   call void @GOMP_parallel_end() nounwind
   ret void
}

define void @foo(...) nounwind uwtable {
entry:
   %.omp_data_o.5 = alloca %struct..omp_data_s.0, align 8
   %0 = bitcast %struct..omp_data_s.0* %.omp_data_o.5 to i8*
   call void @GOMP_parallel_start(void (i8*)* @foo._omp_fn.0, i8* %0, i32 0) nounwind
   call void @foo._omp_fn.0(i8* %0) nounwind uwtable
   call void @GOMP_parallel_end() nounwind
   ret void
}
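
PPS: For reference, the same shape can be written by hand in plain C. Here is a minimal sketch of a single-level parallel region that calls libgomp directly, using the prototypes visible in the IR above (link with -lgomp; the struct and helper names below are my own invention, and the unprotected increment is racy exactly as in the original example):

#include <stdio.h>

/* libgomp entry points, exactly as called in the IR above */
void GOMP_parallel_start(void (*fn)(void *), void *data, unsigned num_threads);
void GOMP_parallel_end(void);

/* hand-written analogue of %struct..omp_data_s: the shared-variable block */
struct omp_data_s { int *i; };

/* hand-written analogue of @foo._omp_fn.0: the outlined region body */
static void foo_omp_fn(void *omp_data_i)
{
  struct omp_data_s *d = omp_data_i;
  (*d->i)++;  /* racy, just like the i++ in the pragma example */
}

void foo(void)
{
  int i = 0;
  struct omp_data_s data = { &i };

  GOMP_parallel_start(foo_omp_fn, &data, 0); /* 0: runtime picks the team size */
  foo_omp_fn(&data);                         /* the master thread runs the region too */
  GOMP_parallel_end();                       /* join the team */

  printf("i = %d\n", i);
}

int main(void) { foo(); return 0; }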

Hello Duncan Sands,

From your reply, what I understand is that no new OpenMP-specific instructions are introduced into LLVM IR as part of the DragonEgg project, since GCC has already done the job of lowering OpenMP directives into GOMP runtime library calls at the low-GIMPLE level.

Now, this throws up the following questions.

  1. Am I correct that DragonEgg should logically support all the OpenMP benchmarks supported by GCC (4.5 or later)?
  2. If we decide to support OpenMP directly in LLVM, what is the better way to handle it? Suppose we decide to support OpenMP through Clang. Do you think that Clang should follow the same mechanism as DragonEgg, lowering all the OpenMP directives into OpenMP runtime library calls, so that it is not necessary to add any new OpenMP-related instructions to LLVM IR?
  3. Assume that tomorrow some front-end other than GCC/Clang is plugged into the LLVM back-end. Or that sometime later a new Fortran front-end is introduced alongside Clang. Or someone wants to plug the EDG front-end into the LLVM back-end. Or OpenACC needs to be supported in LLVM. Or OpenACC merges with OpenMP and becomes one of the standard platforms for programming heterogeneous architectures. Considering these and other future possibilities, what is the best way to support OpenMP in LLVM? By best, I mean that LLVM should not fall behind any other compiler in terms of OpenMP feature completeness and benchmark results, and that the OpenMP infrastructure laid out in LLVM should be flexible enough to handle the future possibilities mentioned above.
     It would be great if you could throw some light on this.

-mahesha

Hi Mahesha,

From your reply, what I understand is that no new OpenMP-specific
instructions are introduced into LLVM IR as part of the DragonEgg project,
since GCC has already done the job of lowering OpenMP directives into GOMP
runtime library calls at the low-GIMPLE level.

correct.

Now, this throws up the following questions.

1. Am I correct that DragonEgg should logically support all the OpenMP
    benchmarks supported by GCC (4.5 or later)?

Yes.

2. If we decide to support OpenMP directly in LLVM, what is the better way to
    handle it? Suppose we decide to support OpenMP through Clang. Do you think
    that Clang should follow the same mechanism as DragonEgg, lowering all the
    OpenMP directives into OpenMP runtime library calls, so that it is not
    necessary to add any new OpenMP-related instructions to LLVM IR?

Personally I think openmp lowering should (mostly) be done in clang, at least to
begin with. Maybe over time some openmp constructs or helper IR manipulation
routines could be added to LLVM.
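
To make that concrete: all a front-end has to do is outline the region body into its own function and emit the runtime calls as ordinary IR. Here is a minimal sketch using the LLVM C API that builds the same GOMP_parallel_start / GOMP_parallel_end skeleton as the dragonegg output above. The function names are placeholders, the shared-data block is elided (a null pointer stands in for it), and it uses the LLVMBuildCall of the LLVM 3.x era (newer releases want LLVMBuildCall2); compile against `llvm-config --cflags --libs core`:

#include <llvm-c/Core.h>

int main(void)
{
  LLVMModuleRef mod = LLVMModuleCreateWithName("omp_lowering_sketch");
  LLVMBuilderRef b = LLVMCreateBuilder();
  LLVMTypeRef i8p = LLVMPointerType(LLVMInt8Type(), 0);
  LLVMTypeRef i32 = LLVMInt32Type();

  /* the outlined region: define internal void @foo._omp_fn.0(i8*) */
  LLVMTypeRef body_params[] = { i8p };
  LLVMTypeRef body_ty = LLVMFunctionType(LLVMVoidType(), body_params, 1, 0);
  LLVMValueRef body = LLVMAddFunction(mod, "foo._omp_fn.0", body_ty);
  LLVMSetLinkage(body, LLVMInternalLinkage);
  LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(body, "entry"));
  LLVMBuildRetVoid(b); /* region body elided for brevity */

  /* declare the libgomp entry points seen in the dragonegg output */
  LLVMTypeRef start_params[] = { LLVMPointerType(body_ty, 0), i8p, i32 };
  LLVMValueRef start = LLVMAddFunction(mod, "GOMP_parallel_start",
      LLVMFunctionType(LLVMVoidType(), start_params, 3, 0));
  LLVMValueRef end = LLVMAddFunction(mod, "GOMP_parallel_end",
      LLVMFunctionType(LLVMVoidType(), NULL, 0, 0));

  /* define void @foo(): fork the team, run the region, join the team */
  LLVMValueRef foo = LLVMAddFunction(mod, "foo",
      LLVMFunctionType(LLVMVoidType(), NULL, 0, 0));
  LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(foo, "entry"));
  LLVMValueRef data = LLVMConstPointerNull(i8p); /* shared-data block elided */
  LLVMValueRef start_args[] = { body, data, LLVMConstInt(i32, 0, 0) };
  LLVMBuildCall(b, start, start_args, 3, "");
  LLVMValueRef body_args[] = { data };
  LLVMBuildCall(b, body, body_args, 1, ""); /* master runs the region too */
  LLVMBuildCall(b, end, NULL, 0, "");
  LLVMBuildRetVoid(b);

  LLVMDumpModule(mod); /* prints IR with the same skeleton as above */
  LLVMDisposeBuilder(b);
  LLVMDisposeModule(mod);
  return 0;
}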

3. Assume that tomorrow some front-end other than GCC/Clang is plugged into
    the LLVM back-end. Or that sometime later a new Fortran front-end is
    introduced alongside Clang. Or someone wants to plug the EDG front-end
    into the LLVM back-end. Or OpenACC needs to be supported in LLVM. Or
    OpenACC merges with OpenMP and becomes one of the standard platforms for
    programming heterogeneous architectures. Considering these and other
    future possibilities, what is the best way to support OpenMP in LLVM? By
    best, I mean that LLVM should not fall behind any other compiler in terms
    of OpenMP feature completeness and benchmark results, and that the OpenMP
    infrastructure laid out in LLVM should be flexible enough to handle the
    future possibilities mentioned above.

LLVM IR is not a universal internal language representation. In my opinion it
is a mistake to try to add everything and the kitchen sink to it (e.g. openmp
pragmas) just because many languages/front-ends may support openmp.

Ciao, Duncan.