[RFC] Upstreaming PACXX (Programming Accelerators with C++)

Hi LLVM community,

After three years of development and various talks at LLVM-HPC, EuroLLVM, and other scientific conferences, I want to present my PhD research topic to the lists.

The main goal of my research was to develop a single-source programming model, comparable to CUDA or SYCL, for accelerators supported by LLVM (e.g., Nvidia GPUs). PACXX uses Clang as its front-end for code generation and comes with a runtime library (PACXX-RT) to execute kernels on the available hardware. Currently, PACXX supports Nvidia GPUs through the NVPTX target and CUDA, CPUs through MCJIT (including whole-function vectorization thanks to RV [1]), and has an experimental back-end for AMD GPUs using the AMDGPU target and ROCm.

The main idea behind PACXX is the use of LLVM IR as the kernel code representation, which is integrated into the executable together with the PACXX-RT. At program runtime, the PACXX-RT compiles the IR down to the final MC level and hands it over to the device. Since PACXX currently does not enforce any major restrictions on the C++ code, we managed to run (almost) arbitrary C++ code on GPUs, including range-v3 [2, 3].
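
For the CPU back-end, the runtime step boils down to roughly the following (a minimal sketch for illustration, not the actual PACXX-RT code): the embedded bitcode image is parsed back into a Module, JIT-compiled with MCJIT, and the kernel entry point is looked up.

#include "llvm/Bitcode/BitcodeReader.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/TargetSelect.h"

using namespace llvm;

// JIT-compile the embedded kernel IR for the host and return the kernel's address.
void *jitKernel(LLVMContext &Ctx, StringRef BitcodeImage, StringRef KernelName) {
  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();

  auto Buf = MemoryBuffer::getMemBuffer(BitcodeImage, "kernel.bc",
                                        /*RequiresNullTerminator=*/false);
  auto ModOrErr = parseBitcodeFile(Buf->getMemBufferRef(), Ctx);
  if (!ModOrErr) {
    consumeError(ModOrErr.takeError());
    return nullptr;
  }

  std::string Err;
  ExecutionEngine *EE = EngineBuilder(std::move(*ModOrErr))
                            .setErrorStr(&Err)
                            .setEngineKind(EngineKind::JIT)
                            .create();
  if (!EE)
    return nullptr;

  EE->finalizeObject(); // run MC code generation for the host target
  return reinterpret_cast<void *>(EE->getFunctionAddress(KernelName.str()));
}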

A short vector addition example using PACXX:

using namespace pacxx::v2;
int main(int argc, char *argv[]) {
    // get the default executor
    auto &exec = Executor::get();
    size_t size = 128;
    std::vector<int> a(size, 1);
    std::vector<int> b(size, 2);
    std::vector<int> c(size, 0);

    // allocate device side memory
    auto &da = exec.allocate<int>(a.size());
    auto &db = exec.allocate<int>(b.size());
    auto &dc = exec.allocate<int>(c.size());
    // copy data to the accelerator
    da.upload(a);
    db.upload(b);
    dc.upload(c);
    // get the raw device pointers
    auto pa = da.get();
    auto pb = db.get();
    auto pc = dc.get();

    // define the computation as a generic lambda
    auto vadd = [=](auto &config) {
      auto i = config.get_global(0);
      if (i < size)
        pc[i] = pa[i] + pb[i];
    };

    // launch the kernel and synchronize
    std::promise<void> promise;
    auto future = exec.launch(vadd, {{1}, {128}}, promise);
    future.wait();
    // copy the result back to the host
    dc.download(c);
}

Recently, I open-sourced PACXX on GitHub [4] under the same license LLVM is currently using.
Since my PhD is now in its final stage, I wanted to ask if there is interest in having such an SPMD programming model upstreamed.
PACXX is currently in sync with release_60 and only requires minor modifications to Clang, e.g., a command line switch, C++ attributes, some diagnostics, and metadata generation during CodeGen.
The PACXX-RT can be integrated into the LLVM build system or may remain a standalone project. (BTW, may I ask for PACXX to be added to the LLVM projects?)

Looking forward to your feedback.

Cheers,
Michael Haidl

[1] https://github.com/cdl-saarland/rv (RV: A Unified Region Vectorizer for LLVM)
[2] https://github.com/ericniebler/range-v3
[3] https://dl.acm.org/authorize?N20051
[4] https://github.com/pacxx/pacxx-llvm

Interesting.

I do something similar for D targeting CUDA (via NVPTX) and OpenCL (via my forward-ported fork of Khronos’ SPIRV-LLVM) [1], except all the code generation is done at compile time. The runtime is aided by compile-time reflection so that calling kernels is done by symbol.

What kind of performance difference do you see running code that was not developed with GPU in mind (e.g. range-v3) vs code that was?
What restrictions do you apply? I assume virtual functions, recursion. What else?

How does pacxx's SPMD model differ from what one can do in LLVM at the moment?

Nic

[1]: http://github.com/libmir/dcompute/

I was going to say, this reminds me of Kai's presentation at FOSDEM yesterday.

It's always good to see the cross-architecture power of LLVM being
used in creative ways! :slight_smile:

cheers,
--renato

    > Hi LLVM community, after 3 years of development, various

    > talks on LLVM-HPC and EuroLLVM and other scientific
    > conferences I want to present my PhD research topic to the
    > lists.

[...]

    > Recently, I open-sourced PACXX on GitHub [4] under the same
    > license LLVM is currently using.

Amazing! :slight_smile:

    > Since my PhD is now in its final stage I wanted to ask if
    > there is interest in having such an SPMD programming model
    > upstreamed.

There are probably a lot of things in your code that could be useful for
a lot of other projects related to heterogeneous computing.

It would be nice to have some common support upstreamed for all these
heterogeneous C++ languages (CUDA/OpenMP/OpenACC/OpenCL C++/SYCL/C++AMP/HCC/...)
to ease their implementation or upstreaming.
For now, only CUDA & OpenMP are upstreamed, I think.
Of course, it is not obvious with all these heterogeneous dialects coming
with some subtle syntax, feature, and semantics differences...

Are you relying on some of the upstreamed CUDA/OpenMP code for your implementation?

Thanks for your work.

Yeah, Nic is the author of this awesome LDC functionality. :slight_smile:

Regards,
Kai

This is cool. I’m very glad to see your PhD research was done in a production environment and that you are open-sourcing it.

PACXX looks a lot like SYCL. Have you considered whether it can be evolved into a high-quality implementation of the SYCL standard API? There’s a lot of value in implementing a standardized API with multiple implementations. I’ve used ComputeCpp, triSYCL, and sycl-gtx recently, and each has limitations that could be overcome by having SYCL support in Clang/LLVM.

Your contribution of a good SYCL implementation would be really valuable, since there appears to be no implementation that both performs well and is open-source. I completely understand if you have no time to turn PACXX into SYCL, but if you have technical arguments against doing so even if time permitted, I think they’d be useful to share, although perhaps not in this forum.
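
For comparison, here is roughly what the same vector addition looks like against the SYCL 1.2.1 API (a sketch written from the spec, not tied to any particular implementation):

#include <CL/sycl.hpp>
#include <vector>

int main() {
  namespace sycl = cl::sycl;
  size_t size = 128;
  std::vector<int> a(size, 1), b(size, 2), c(size, 0);
  {
    sycl::queue q;
    // buffers take ownership of the host data for the duration of this scope
    sycl::buffer<int> da(a.data(), sycl::range<1>(size));
    sycl::buffer<int> db(b.data(), sycl::range<1>(size));
    sycl::buffer<int> dc(c.data(), sycl::range<1>(size));
    q.submit([&](sycl::handler &cgh) {
      auto pa = da.get_access<sycl::access::mode::read>(cgh);
      auto pb = db.get_access<sycl::access::mode::read>(cgh);
      auto pc = dc.get_access<sycl::access::mode::write>(cgh);
      cgh.parallel_for<class vadd>(sycl::range<1>(size), [=](sycl::id<1> i) {
        pc[i] = pa[i] + pb[i];
      });
    });
  } // buffers are destroyed here and write their results back to the host vectors
}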

On a practical level, PACXX seems to require some software hardening. I tried to follow the docs and build it locally, but had some issues (GitHub issues were created, so I’ll omit details here) and ultimately failed to compile your example program. I’d love to be able to try it out, since I’ve recently evaluated SYCL and Boost.Compute using https://github.com/ParRes/Kernels, and it seems like PACXX is a peer of these.

Best,

Jeff

Interesting.

I do something similar for D targeting CUDA (via NVPTX) and OpenCL (via my
forward-ported fork of Khronos’ SPIRV-LLVM) [1], except all the code
generation is done at compile time. The runtime is aided by compile time
reflection so that calling kernels is done by symbol.

What kind of performance difference do you see running code that was not
developed with GPU in mind (e.g. range-v3) vs code that was?

[Haidl, Michael]
We extended range-v3 with a few GPU-enabled algorithms, especially to exploit range-v3's views for execution on GPUs. While the kernels are clearly designed for GPUs, mixing them with code like range-v3's views showed no negative performance impact. We evaluated against Thrust in the linked paper and were able to get on par with it. The views of range-v3 really come with zero-cost abstractions.
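
To give an idea of what that mixing looks like, here is a minimal sketch (not our actual benchmark code) that reuses the Executor interface and the device pointers pa, pb, pc from the example in my first mail:

#include <range/v3/all.hpp>

// same setup as in the vector addition example: pa, pb are device input
// pointers, pc the output pointer, size the number of elements
auto vadd_views = [=](auto &config) {
  auto i = config.get_global(0);
  // lazily evaluated view over the device data; each work-item only ever
  // materializes its own element
  auto sums = ranges::view::zip_with([](int x, int y) { return x + y; },
                                     ranges::view::counted(pa, size),
                                     ranges::view::counted(pb, size));
  if (i < size)
    pc[i] = sums[i];
};

The launch itself is unchanged from the first example.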

What restrictions do you apply? I assume virtual functions, recursion. What
else?

[Haidl, Michael]
Virtual functions are still a problem. Recursion works up to a point (the stack frame size on the GPU is the limitation here). Since PACXX builds on CUDA and HIP, we can assume that recursion is possible (with minor intervention by the developer to set the stack size correctly).
Exception handling in kernels is currently not possible in PACXX.
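
On the CUDA back-end that stack-size intervention is essentially the plain CUDA runtime call below (a sketch; PACXX may expose it differently to the developer):

#include <cstddef>
#include <cuda_runtime.h>

// Enlarge the per-thread device stack so moderately deep recursion does not
// overflow it; the default on current devices is only a few kilobytes.
void reserve_device_stack(size_t bytes_per_thread) {
  cudaDeviceSetLimit(cudaLimitStackSize, bytes_per_thread);
}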

How does pacxx's SPMD model differ from what one can do in LLVM at the
moment?

[Haidl, Michael]
There is not much difference. I have a small experimental branch that accepts CUDA as input code and compiles it with PACXX. The only problem is device-specific code, like the NVPTX intrinsics generated by Clang for CUDA, which currently makes portable execution impossible.

    > Hi LLVM community, after 3 years of development, various
    > talks on LLVM-HPC and EuroLLVM and other scientific
    > conferences I want to present my PhD research topic to the
    > lists.

[...]

    > Recently, I open-sourced PACXX on GitHub [4] under the same
    > license LLVM is currently using.

Amazing! :slight_smile:

    > Since my PhD is now in its final stage I wanted to ask if
    > there is interest in having such an SPMD programming model
    > upstreamed.

There are probably a lot of things in your code that could be useful for a lot of
other projects related to heterogeneous computing.

It would be nice to have some common support upstreamed for all these
heterogeneous C++ languages (CUDA/OpenMP/OpenACC/OpenCL
C++/SYCL/C++AMP/HCC/...) to ease their implementation or upstreaming.
For now, only CUDA & OpenMP are upstreamed, I think.
Of course, it is not obvious with all these heterogeneous dialects coming with
some subtle syntax, feature, and semantics differences...

Are you relying on some of the upstreamed CUDA/OpenMP code for your
implementation?

[Haidl, Michael]
There are parts that PACXX handles the same way as CUDA, for example the handling of variables declared in shared memory. Also, each C++ generic attribute used by PACXX has a GNU attribute counterpart in CUDA. It should be possible to converge some parts of PACXX with CUDA with minor modifications to PACXX. I will look into it.
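
For illustration only (the PACXX attribute spelling below is hypothetical; the GNU attribute is what Clang's CUDA support expands the __shared__ macro to, and it is only valid when compiling in CUDA device mode):

// Hypothetical PACXX C++ generic attribute (illustrative spelling, not the
// actual attribute name):
//   [[pacxx::shared]] int tile[64];

// GNU-style counterpart used by Clang's CUDA support (__shared__ expands to it):
__attribute__((shared)) int tile[64];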

Hi Jeff,

thanks for your feedback. Comments are inlined :wink: