grosser at fim.uni-passau.de
Tue Mar 22 14:28:12 CDT 2011
On 03/22/2011 01:56 PM, Reid Kleckner wrote:
> On Tue, Mar 22, 2011 at 1:36 PM, Gokul Ramaswamy
> <gokulhcramaswamy at gmail.com> wrote:
> Hi Duncan Sands,
> As I have understood it, GOMP and OpenMP provide support for
> parallelizing programs at the source level. But I am working at the IR
> level; that is, I am trying to parallelize the IR code. This is the
> case of automatic parallelization: the programmer writing the code
> has no idea of the parallelization going on under the hood.
> So my question is: instead of support at the source level, is
> there any support at the LLVM IR level to parallelize things?
> No, you have to insert calls to things like pthreads or GOMP or OpenMP
> or whatever threading runtime you choose.
Which is what we also do in Polly.
In case you just have the simple case of two statements you want to
execute in parallel, I propose writing this as OpenMP-annotated C code,
compiling it with dragonegg to LLVM-IR, and having a look at the code
that is generated. You will need to create similar code and similar
function calls if you want to do it at the LLVM-IR level.
One thing that might simplify the code is to specify in OpenMP that you
want to be able to select choices at runtime. A common construct is:
This will stop dragonegg from inlining some OpenMP runtime calls, which
could complicate the code unnecessarily.
P.S.: In case of directly inserting OpenMP function calls, it would be
nice to have support for a set of LLVM intrinsics that are
automatically lowered to the relevant OpenMP/mpc.sf.net function
calls. Let me know if you think about working on such a thing.