[PATCH][RFC] NVPTX Backend

Hi LLVMers,

We at NVIDIA would like to contribute back to the LLVM open-source community by up-streaming the NVPTX back-end for LLVM. This back-end is based on the sources used by NVIDIA, and currently provides significantly more functionality than the current PTX back-end. Some functionality is currently disabled due to dependencies on LLVM core changes that we are also in the process of up-streaming, but the back-end is very usable in its current state and would benefit all current and future users of the LLVM PTX back-end.

The goal is to phase out the existing PTX back-end, while maintaining compatibility with it. To that end, the NVPTX back-end maintains its own set of intrinsics but also supports the old PTX back-end intrinsics to ensure compatibility with out-of-tree users.

We would like to get your feedback on the attached patch to make sure it is up to LLVM commit quality. We would like to commit this as soon as the community is satisfied with it.

Also, as the current maintainer of the PTX back-end, I give my consent to deprecate it. :)

Thanks,

Justin Holewinski

nvptx-backend-public.patch (1020 KB)

I’ve ended up having to make some local modifications to the PTX backend that I was preparing to post patches for, although, if it’s going away, I guess I shouldn’t bother. :)

If the patches are still applicable, then by all means! :)

Leafing through the patch briefly, though, I don’t think I see any support for the LLVM rem instruction, nor do I see any support for the LLVM exp2/log2 intrinsics… am I missing something, or are these operations (still) unsupported in the NVPTX backend? On the plus side, it’s nice to see that at least two or three of the small issues are already fixed.

Integer srem/urem are supported (see NVPTXInstrInfo.td), frem is not. At the moment, we only support intrinsics listed in the LLVM language reference, but the PTX versions of ex2/lg2 are exposed through the llvm.nvvm.{ex2,lg2}.approx.{f,d,ftz.f} intrinsics (see include/llvm/IntrinsicsNVVM.td).
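To make that concrete, here is a small LLVM IR sketch of calling one of those NVVM intrinsics directly (the declaration follows the naming scheme described above; treat it as illustrative rather than a verified signature from the patch):

```llvm
; Sketch: srem lowers to the PTX integer rem instruction, and the
; approximate exp2 is reached through an NVVM intrinsic rather than
; the generic llvm.exp2 intrinsic.
declare float @llvm.nvvm.ex2.approx.f(float)

define float @use_ex2(float %x, i32 %a, i32 %b) {
  %r = srem i32 %a, %b                              ; integer rem is supported
  %e = call float @llvm.nvvm.ex2.approx.f(float %x) ; maps to ex2.approx.f32
  ret float %e
}
```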

Hi Justin,

Cool stuff, to be sure. Excited to see this.

As a precursor to more involved technical feedback, I suggest going through and fixing up the coding style and formatting issues. Just glancing through, I see lots of things like function names starting with capital letters, compound statements with the opening brace on the line following an if/for/while/etc., single statements after an 'if' enclosed in a compound statement, container.end() being evaluated on every iteration of loops, etc.

-Jim

Thanks for the feedback!

The attached patch addresses the style issues that have been found.

nvptx-backend-public.patch (1000 KB)

From: Dan Bailey [mailto:dan@dneg.com]
Sent: Sunday, April 29, 2012 8:46 AM
To: Justin Holewinski
Cc: Jim Grosbach; llvm-commits@cs.uiuc.edu; Vinod Grover;
llvmdev@cs.uiuc.edu
Subject: Re: [llvm-commits] [PATCH][RFC] NVPTX Backend

Justin,

Firstly, this is great! It seems to be so much further forward in terms of
features compared to the PTX backend and I'm pleased to see all the support
for vector instructions, atomics, debugging information, etc - much of which
was still to be added to the other backend.

I have a few comments to make, but bear in mind my knowledge comes
solely from working on the PTX backend, so I don't know much about how
this compares to other backends and the feasibility of reusing components
designed to be shared across backends, plus my knowledge of tablegen is
also relatively limited.

This is in no particular order:

* Is there any reason why the 32-bit arch name is nvptx and the 64-bit arch
name is nvptx64, especially as the default for the NVCC compiler is to pass
the -m64 flag? Also, all the internal naming (NVPTXTargetMachine for
example) uses 32/64 suffixes.

As far as I know, there is no fundamental reason for this. If it bugs you too much, I'm sure we could change it. :)

* The register naming seems a little arbitrary as well, using FL prefixes for 64-
bit float and da prefixes for 64-bit float arguments for example.

Really, any choice is going to be arbitrary.

* Something I picked up in the NVVM IR spec - it seems to only be possible to
use the bar.sync 0 instruction. Unless this is being removed for PTX 3.0, the
spec (and the PTX backend) support using bar.sync {0..15}. The old PTX
intrinsic also supports a non-zero integer operand.

The NVVM intrinsic is there to implement CUDA's __syncthreads(). The old intrinsic is still exposed. I'll see about adding a proper NVVM intrinsic.
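A short IR sketch of the difference being discussed (the exact intrinsic spellings here, particularly the old PTX one, are illustrative assumptions, not confirmed against the patch):

```llvm
; The NVVM barrier intrinsic takes no operand: it exists to implement
; CUDA's __syncthreads(), i.e. bar.sync 0.
declare void @llvm.nvvm.barrier0()
; The old PTX back-end intrinsic took an operand, allowing bar.sync {0..15}.
declare void @llvm.ptx.bar.sync(i32)

define void @sync_example() {
  call void @llvm.nvvm.barrier0()     ; bar.sync 0 only
  call void @llvm.ptx.bar.sync(i32 1) ; bar.sync 1
  ret void
}
```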

* I guess this raises the question of whether or not it's actually worthwhile
retaining compatibility with the old backend. I converted my Jet compiler to
use NVVM intrinsics and still found the address spaces were out of line, so it
was largely useless until I'd done the full conversion anyway. If the plan is to
deprecate the old backend (as I fully support), I'd be tempted to say maybe
just leave all these old intrinsics out too?

This is something I've debated back and forth. The original thought was to maintain as much backwards compatibility as possible with the old back-end. I don't think it hurts anything to leave the old intrinsics in there.

* I know this isn't an easy problem, as I remember discussing it at length
with you at the time, Justin, but supporting i8 for ld, st, and cvt seems to
introduce a load of blocks of string-concat code in TableGen. Is there not a
better way of tackling this? It looks like a fairly ugly solution to me.

Oh there are quite a few ugly TableGen solutions in there. :slight_smile: I'm trying to find ways to clean some of this stuff up, but it works so it's hard to justify devoting time to it.

* There are a few places in the code where there are blocks of commented
out code that I presume are waiting for upstreaming of other additions.
Some of these are documented, but many of them aren't. Maybe these
should either be removed for addition later or some explanation added as to
why they're not being used.

Good point. I'll see about adding more documentation to these.

* There seem to be a lot of features that are lacking tests too. I was trying
to find the equivalent flags to use for switching FMA on and off (a feature we
use quite a bit to compare results with x86) and couldn't find any tests
demonstrating this. Considering the amount of code in the backend
committed to handling mad / fma, there really should be a bunch of tests to
go along with it. In fact, none of the nvptx-* flags seem to have any
accompanying test cases, nor do atomics, NVVM intrinsics, or many of the
other instructions. I imagine NVIDIA has a fairly full-featured CUDA testing
framework, but if this is going to be made open-source, I'd prefer that as
many of the instructions as possible have tests too. Having said that, maybe
some of the test cases for the PTX backend could be migrated across?

I'm working on converting the tests now.
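For reference, a converted test would presumably follow the usual lit/FileCheck shape. A minimal sketch, using the explicit llvm.fma intrinsic rather than any nvptx-* contraction flag (whose exact names I haven't confirmed):

```llvm
; RUN: llc < %s -march=nvptx -mcpu=sm_20 | FileCheck %s
; Sketch of an FMA test: an explicit llvm.fma call should come out as
; a fused multiply-add in the emitted PTX.
declare float @llvm.fma.f32(float, float, float)

define float @fma_test(float %a, float %b, float %c) {
; CHECK: fma.rn.f32
  %r = call float @llvm.fma.f32(float %a, float %b, float %c)
  ret float %r
}
```

A flag-coverage test would look the same but pass the relevant -nvptx-* option on the RUN line and CHECK for the contracted (or uncontracted) form.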

* Finally I spotted a number of typos in the comments in the code that could
be cleaned up.

There's a lot of code here, so I might have missed some things. On the whole
though, most of this is fairly minor. It looks well designed and I'm looking
forward to this making it in. I'm especially excited to see the new pass for
vectorization and other passes to do code transformations. I managed to run
our fluid solver through this backend and it generated PTX that did the right
thing. I haven't had time to do any profiling comparing the old backend yet.

Can I put you down as a technical reviewer of the back-end? :slight_smile:

As the main consumer of the open-source back-end, your seal of approval would go a long way!

All sounds good to me. Sure, put me down. :)

Thanks,
Dan

All,

One of the primary users of the existing PTX back-end provided a technical review for us and has confirmed that the back-end works as expected. Since this is replacing the existing PTX back-end and does not affect any other targets or parts of LLVM (other than adding the triple and intrinsics), and we have confirmation that it works as expected for users, can we commit this? If not, what other action is required?