Parallelization

Hi All,

   I am new to LLVM. So please help me out. Here is what I am trying to
achieve:

      If there are 2 statements in a source program -
         S1;
         S2;

      and I know there is no data or control dependency between them, and
both take a large amount of time to execute. So I want to execute them
in parallel.

      So as S1 starts executing, I want to launch another thread and
execute S2 in parallel.

  I need help on how to launch a new thread and schedule some specific
code on this new thread. I searched for it but did not find satisfactory
results. Please help me out, LLVM developers.

Regards,
Gokul Ramaswamy H.C

Hi Gokul Ramaswamy,

llvm-gcc and dragonegg support GOMP (GNU OpenMP). The way it works is that the
front-end lowers parallel constructs into library calls, extra functions and so
on.

Ciao, Duncan.

Hi Duncan Sands,

As I understand it, GOMP and OpenMP provide support for parallelizing a program at the source level. But I am working at the IR level; that is, I am trying to parallelize the IR code. This is the case of automatic parallelization: the programmer writing the code has no idea of the parallelization going on under the hood.

So my question is: instead of support at the source program level, is there any support at the LLVM IR level to parallelize things?

Regards,
Gokul Ramaswamy H.C

No, you have to insert calls to things like pthreads or GOMP or OpenMP or whatever threading runtime you choose.

Reid

Which is what we also do in Polly.

In case you just have the simple case of two statements you want to execute in parallel, I propose writing this as OpenMP-annotated C code, compiling it with dragonegg to LLVM IR, and having a look at what code is generated. You will need to create similar code and similar function calls if you want to do it at the LLVM IR level.

One thing that might simplify the generated code is to specify in OpenMP that you want the schedule to be selected at run time. A common construct is:

SCHEDULE(runtime)

This will stop dragonegg from inlining some OpenMP runtime calls, which could complicate the code unnecessarily.

Cheers
Tobi

P.S.: In the case of directly inserting OpenMP function calls, it would be nice to have support for a set of LLVM intrinsics that are automatically lowered to the relevant OpenMP/mpc.sf.net function calls. Let me know if you think about working on such a thing.

Hi,

I am looking into something similar for auto-parallelization, i.e. some sort of low-level support at the IR level for parallelization.
I’d be interested in collaborating with anyone who is working on the same.

From a brief look at the architectural overview of Polly, it seems that the parallel code generation is done at the IR level, since the input file is an LLVM IR file?
Would it be possible to re-utilize that functionality for building something to this end?

Thanks
Nipun

Adding to Tobias' comments, the following is what Polly with OpenMP support does.

If Polly detects that two statements (typically loops) can be
parallelized, it will generate the required GOMP calls automatically.
As of now the interface is not designed in such a way that it can be
reused.

If we find that designing such OpenMP intrinsics would be useful to people,
we can think about that.

Regards,