Is it possible to translate an OpenMP program to CUDA using LLVM? I read that dragonegg has an OpenMP front-end and LLVM has a PTX back-end. I don't know how mature these tools are. Please let me know. Thanks.
Department of Computer Science, University of Chicago
Computation Institute, Argonne National Laboratory
The PTX back-end is robust (it's based on the sources used by nvcc), but I'm not sure about the OpenMP representation in LLVM IR. I believe the OpenMP constructs are already lowered into libgomp calls before leaving DragonEgg. It's been a while since I've looked at it, though.
If you use the PTX back-end and have any issues, please don’t hesitate to post to the list and cc: me at email@example.com
Have you heard of OpenACC? That's an OpenMP-like set of directives
specially tuned for accelerators: data regions, implicit or explicit
memory copy-ins/copy-outs, gangs/vectors, etc. Developing your idea
further would, in fact, lead to implementing an OpenACC front-end for
LLVM. The PTX back-end is definitely a necessary ingredient, but at
some point you would also need to develop a loop analysis system. Two
projects are doing related work: Polly and KernelGen [2,3].
From: "Justin Holewinski" <firstname.lastname@example.org>
To: "Apala Guha" <email@example.com>
Sent: Friday, November 9, 2012 4:22:02 PM
Subject: Re: [LLVMdev] translating from OpenMP to CUDA
The PTX back-end is robust (it's based on the sources used by nvcc),
but I'm not sure about the OpenMP representation in LLVM IR. I
believe the OpenMP constructs are already lowered into libgomp calls
before leaving DragonEgg. It's been a while since I've looked at it,
though.
This is correct. dragonegg will turn the OpenMP pragmas into calls to libgomp. Unfortunately, these calls won't help you directly with the PTX backend (because there is no libgomp that runs on the GPU).