Reducing Generic Address Space Usage

This is a follow-up discussion on http://lists.cs.uiuc.edu/pipermail/cfe-commits/Week-of-Mon-20140324/101899.html. The front-end change was already pushed in r204677, so we want to continue with the IR optimization.

In general, we want to write an IR pass to convert generic address space usage to non-generic address space usage, because accessing the generic address space in CUDA and OpenCL is significantly slower than accessing non-generic ones (such as shared and constant).

Here is an example Justin gave:

  %ptr = ...
  %val = load i32* %ptr

In this case, %ptr is a generic address space pointer (assuming an address space mapping where 0 is generic). But if an analysis can prove that the pointer %ptr was originally addrspacecast’d from a specific address space (or some other mechanism through which the pointer’s specific address space can be determined), it may be beneficial to explicitly convert the IR to something like:

  %ptr = ...
  %ptr.0 = addrspacecast i32* %ptr to i32 addrspace(3)*
  %val = load i32 addrspace(3)* %ptr.0

Such a translation may generate better code for some targets.
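For NVPTX, for example, the difference shows up directly in the generated PTX (a hedged sketch; register names are illustrative):

  ld.f32        %f1, [%r1];   // load through a generic pointer
  ld.shared.f32 %f1, [%r1];   // load directly from shared memory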

There are two major design decisions we need to make:

  1. Where does this pass live? Target-independent or target-dependent?

Both the NVPTX and R600 backends want this optimization, which seems a good justification for making it target-independent.

However, we have three concerns about this:
a) I doubt this optimization is valid for all targets, because the LLVM language reference (http://llvm.org/docs/LangRef.html#addrspacecast-to-instruction) says addrspacecast “can be a no-op cast or a complex value modification, depending on the target and the address space pair.”
b) NVPTX and R600 have different address numbering for the generic address space, which makes things more complicated.
c) We don’t have a good understanding of the R600 backend.

Therefore, I would vote for making this optimization NVPTX-specific for now. If other targets need this, we can later think about how to reuse the code.

  2. How effective do we want this optimization to be?

In the short term, I want it to be able to eliminate unnecessary non-generic-to-generic addrspacecasts the front-end generates for the NVPTX target. For example,

  %p1 = addrspacecast i32 addrspace(3)* %p0 to i32*
  %v = load i32* %p1

=>

  %v = load i32 addrspace(3)* %p0

We want similar optimizations for store+addrspacecast and gep+addrspacecast as well.
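For example (a sketch of the analogous rewrites, in the same typed-pointer IR syntax used above):

  %p1 = addrspacecast i32 addrspace(3)* %p0 to i32*
  store i32 %v, i32* %p1

=>

  store i32 %v, i32 addrspace(3)* %p0

and

  %p1 = addrspacecast i32 addrspace(3)* %p0 to i32*
  %p2 = getelementptr i32* %p1, i64 1
  %v = load i32* %p2

=>

  %p2 = getelementptr i32 addrspace(3)* %p0, i64 1
  %v = load i32 addrspace(3)* %p2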

In the long term, we can certainly improve this optimization to handle more instructions and more patterns.

Jingyue

I think most of the simple cast optimizations would be acceptable. The addrspacecasted pointer still needs to point to the same memory location, so changing an access to use a different address space would be OK. I think canonicalizing accesses to use the original address space of a casted pointer when possible would make sense.

R600 currently does not support the flat address space instructions intended to be used for the generic address space. I posted a patch a while ago that half added it, which I can try to finish if it would help. I also do not understand how NVPTX uses address spaces, particularly how it can use 0 as the generic address space.

I believe most of the cast simplifications that apply to bitcasts of pointers also apply to addrspacecast. I have some patches waiting that extend some of the more basic ones to understand addrspacecast (e.g. http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140120/202296.html), plus a few more that I haven't posted yet. Mostly they are little cast simplifications in instcombine, like your example, but also SROA to eliminate allocas that are addrspacecasted.
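For the SROA case, a minimal sketch of the pattern those patches target (hedged; addrspace(4) here just stands in for a target's flat/generic space):

  %tmp = alloca i32
  %tmp.flat = addrspacecast i32* %tmp to i32 addrspace(4)*
  store i32 %v, i32 addrspace(4)* %tmp.flat
  %v2 = load i32 addrspace(4)* %tmp.flat

Once SROA looks through the addrspacecast, the alloca can be promoted and %v2 is simply replaced by %v.

-Matt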

However, we have three concerns about this:
a) I doubt this optimization is valid for all targets, because the LLVM
language reference (
http://llvm.org/docs/LangRef.html#addrspacecast-to-instruction) says
addrspacecast "can be a no-op cast or a complex value modification,
depending on the target and the address space pair."

I think most of the simple cast optimizations would be acceptable. The
addrspacecasted pointer still needs to point to the same memory location,
so changing an access to use a different address space would be OK. I think
canonicalizing accesses to use the original address space of a casted
pointer when possible would make sense.

"the address space conversion is legal then both result and operand refer
to the same memory location". I don't quite understand this sentence. Does
the same memory location mean the same numeric value?

  b) NVPTX and R600 have different address numbering for the generic
address space, which makes things more complicated.
c) We don't have a good understanding of the R600 backend.

R600 currently does not support the flat address space instructions
intended to be used for the generic address space. I posted a patch a while
ago that half added it, which I can try to finish if it would help.

I also do not understand how NVPTX uses address spaces, particularly how
it can use 0 as the generic address space.

The NVPTX backend generates ld.f32 for reading from the generic address
space. Is there no special machine instruction to read/write from/to the
generic address space in R600?

  2. How effective do we want this optimization to be?

In the short term, I want it to be able to eliminate unnecessary
non-generic-to-generic addrspacecasts the front-end generates for the NVPTX
target. For example,

%p1 = addrspacecast i32 addrspace(3)* %p0 to i32*
%v = load i32* %p1

=>

%v = load i32 addrspace(3)* %p0

We want similar optimizations for store+addrspacecast and
gep+addrspacecast as well.

In the long term, we can certainly improve this optimization to handle
more instructions and more patterns.

  I believe most of the cast simplifications that apply to bitcasts of
pointers also apply to addrspacecast. I have some patches waiting that
extend some of the more basic ones to understand addrspacecast (e.g.
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140120/202296.html),
plus a few more that I haven't posted yet. Mostly they are little cast
simplifications like your example in instcombine, but also SROA to
eliminate allocas that are addrspacecasted.

We also think InstCombine is a good place to put this optimization, if we
decide to go with the target-independent approach. Looking forward to your patches!

No, it means they could both have different values that point to the same physical location. Storing to a pointer in one address space should have the same effect as storing to the addrspacecasted pointer, though it might not use the same value or instructions to do so.

New hardware does have flat address space instructions, which is what my patch adds support for. They're just not defined in the target yet. This flat address space is separate from 0 / the default. I think of addrspace(0) as the address space of allocas, so I don't understand how that can be consolidated with generic accesses of the other address spaces. Does NVPTX not differentiate between accesses of a generic pointer and private / alloca'd memory?

I think that strategy only gets you part of the way to ideal. For example, preferring to use the original address space works well for accesses to objects where you start with the known address space. You could also have a function with a generic address space argument casted back to a specific address space. Preferring the original address space in that case is the opposite of what you want, although I expect this case will end up being much less common in real code and will tend to go away after inlining.
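A hedged sketch of that last case (the function name and address space are illustrative):

  define void @bump(i32* %p) {
    ; the caller knows %p really points into shared memory,
    ; so the generic pointer is cast back to addrspace(3)
    %p.shared = addrspacecast i32* %p to i32 addrspace(3)*
    %v0 = load i32 addrspace(3)* %p.shared
    %v1 = add i32 %v0, 1
    store i32 %v1, i32 addrspace(3)* %p.shared
    ret void
  }

Here the "original" address space of %p.shared is the generic one, so canonicalizing back to it would undo a useful specialization.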

We handle alloca by expanding it to a local stack reservation plus a pointer conversion to the generic address space: a store through an alloca'd pointer in the IR really gets expanded, at the MachineInstr level, into the stack reservation, a local-to-generic conversion, and a generic store, and with the proposed optimization the generic store is turned back into a non-generic store (see the sketch below). This turns the address space conversion sequence into a no-op (assuming no other users) that can be eliminated, and a non-generic store is likely to be more efficient than a generic store.
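A sketch of the whole sequence (a hedged reconstruction assuming NVPTX's cvta.local-style lowering; names and pseudo-instructions are illustrative):

IR:

  %buf = alloca i32
  store i32 0, i32* %buf

MachineInstr level (pseudo-code):

  %buf.local = <address of the local stack reservation>
  %buf.gen = cvta.local %buf.local   // local-to-generic conversion
  st.u32 [%buf.gen], 0               // generic store

After the proposed optimization:

  %buf.local = <address of the local stack reservation>
  st.local.u32 [%buf.local], 0       // non-generic store; the cvta.local is now dead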

This is a follow-up discussion on
http://lists.cs.uiuc.edu/pipermail/cfe-commits/Week-of-Mon-20140324/101899.html.
The front-end change was already pushed in r204677, so we want to continue
with the IR optimization.

In general, we want to write an IR pass to convert generic address space
usage to non-generic address space usage, because accessing the generic
address space in CUDA and OpenCL is significantly slower than accessing
non-generic ones (such as shared and constant).

Here is an example Justin gave:

  %ptr = ...
  %val = load i32* %ptr

In this case, %ptr is a generic address space pointer (assuming an address
space mapping where 0 is generic). But if an analysis can prove that the
pointer %ptr was originally addrspacecast'd from a specific address space
(or some other mechanism through which the pointer's specific address space
can be determined), it may be beneficial to explicitly convert the IR to
something like:

  %ptr = ...
  %ptr.0 = addrspacecast i32* %ptr to i32 addrspace(3)*
  %val = load i32 addrspace(3)* %ptr.0

Such a translation may generate better code for some targets.

I think a slight variation of this optimization may be useful for the
R600 backend. One thing I have been working on is migrating allocas
to different address spaces, which in some cases may improve
performance. Here is an example:

%ptr = alloca [5 x i32]
...

Would become:

@local_mem = internal unnamed_addr addrspace(3) global [5 x i32] zeroinitializer

%ptr = addrspacecast [5 x i32] addrspace(3)* @local_mem to [5 x i32]*
...

In this case I would like all users of %ptr to read and write
address space 3 rather than address space 0, and it sounds like your
proposed optimization pass could do this.
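A sketch of what that rewrite would do to a user of %ptr (hedged; the GEP and load are illustrative):

  %elt = getelementptr [5 x i32]* %ptr, i32 0, i32 2
  %v = load i32* %elt

=>

  %elt = getelementptr [5 x i32] addrspace(3)* @local_mem, i32 0, i32 2
  %v = load i32 addrspace(3)* %elt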

There are two major design decisions we need to make:

1. Where does this pass live? Target-independent or target-dependent?

Both the NVPTX and R600 backends want this optimization, which seems a good
justification for making it target-independent.

I agree here.

However, we have three concerns about this:
a) I doubt this optimization is valid for all targets, because the LLVM
language reference (
http://llvm.org/docs/LangRef.html#addrspacecast-to-instruction) says
addrspacecast "can be a no-op cast or a complex value modification,
depending on the target and the address space pair."

Does it matter that it isn't valid for all targets as long as it is
valid for some? We could add it, but not run it by default.

b) NVPTX and R600 have different address numbering for the generic address
space, which makes things more complicated.

Could we add a TargetLowering callback that the pass can use to determine
whether or not it is profitable to replace one address space with
another?

-Tom

However, we have three concerns about this:
a) I doubt this optimization is valid for all targets, because the LLVM
language reference (
http://llvm.org/docs/LangRef.html#addrspacecast-to-instruction) says
addrspacecast "can be a no-op cast or a complex value modification,
depending on the target and the address space pair."

I think most of the simple cast optimizations would be acceptable. The
addrspacecasted pointer still needs to point to the same memory location,
so changing an access to use a different address space would be OK. I think
canonicalizing accesses to use the original address space of a casted
pointer when possible would make sense.

"the address space conversion is legal then both result and operand refer
to the same memory location". I don't quite understand this sentence. Does
the same memory location mean the same numeric value?

No, it means they could both have different values that point to the same
physical location. Storing to a pointer in one address space should have
the same effect as storing to the addrspacecasted pointer, though it might
not use the same value or instructions to do so.

That makes sense. Thanks!

  b) NVPTX and R600 have different address numbering for the generic
address space, which makes things more complicated.
c) We don't have a good understanding of the R600 backend.

R600 currently does not support the flat address space instructions
intended to be used for the generic address space. I posted a patch a while
ago that half added it, which I can try to finish if it would help.

I also do not understand how NVPTX uses address spaces, particularly how
it can use 0 as the generic address space.

The NVPTX backend generates ld.f32 for reading from the generic address
space. Is there no special machine instruction to read/write from/to the
generic address space in R600?

New hardware does have flat address space instructions, which is what my
patch adds support for. They're just not defined in the target yet. This
flat address space is separate from 0 / the default. I think of
addrspace(0) as the address space of allocas, so I don't understand how
that can be consolidated with generic accesses of the other address spaces.
Does NVPTX not differentiate between accesses of a generic pointer and
private / alloca'd memory?

See Justin's followup. Looks like this optimization can benefit local
accesses as well.

  2. How effective do we want this optimization to be?

In the short term, I want it to be able to eliminate unnecessary
non-generic-to-generic addrspacecasts the front-end generates for the NVPTX
target. For example,

%p1 = addrspacecast i32 addrspace(3)* %p0 to i32*
%v = load i32* %p1

=>

%v = load i32 addrspace(3)* %p0

We want similar optimizations for store+addrspacecast and
gep+addrspacecast as well.

In the long term, we can certainly improve this optimization to handle
more instructions and more patterns.

   I believe most of the cast simplifications that apply to bitcasts of
pointers also apply to addrspacecast. I have some patches waiting that
extend some of the more basic ones to understand addrspacecast (e.g.
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140120/202296.html),
plus a few more that I haven't posted yet. Mostly they are little cast
simplifications like your example in instcombine, but also SROA to
eliminate allocas that are addrspacecasted.

We also think InstCombine is a good place to put this optimization, if
we decide to go with the target-independent approach. Looking forward to your patches!

I think that strategy only gets you part of the way to ideal. For example,
preferring to use the original address space works well for accesses to
objects where you start with the known address space. You could also have a
function with a generic address space argument casted back to a specific
address space. Preferring the original address space in that case is the
opposite of what you want, although I expect this case will end up being
much less common in real code and will tend to go away after inlining.

You're right. I overlooked this case. I doubt a CUDA program would even
use generic-to-non-generic casts, because non-generic address space
qualifiers only qualify declarations.

The backend can indicate which address spaces it prefers using some flags
(e.g., preferNonGenericPointers, as Justin suggested). InstCombine can then
look at these flags to decide what to do.

This optimization can benefit all address spaces. Imagine you have a library function 'foo' that takes pointer arguments in the generic address space, and that this function is called twice in a kernel function: once on a global pointer and once on a shared pointer (see the sketch below). Assuming 'foo' is inlined, you could convert the loads in the first call to 'ld.global' and the loads from the second call to 'ld.shared'.

The optimization should also treat the absence of target information as a flag to just not run.
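A hedged sketch of that scenario in 2014-era typed-pointer IR ('foo' and the kernel are named in the text above; everything else is illustrative):

  ; library function taking a generic pointer argument
  define float @foo(float* %p) {
    %v = load float* %p              ; on its own, a generic ld.f32
    ret float %v
  }

  @gbuf = addrspace(1) global float 0.000000e+00   ; global memory
  @sbuf = addrspace(3) global float 0.000000e+00   ; shared memory

  define void @kernel() {
    %g = addrspacecast float addrspace(1)* @gbuf to float*
    %s = addrspacecast float addrspace(3)* @sbuf to float*
    %v1 = call float @foo(float* %g)   ; after inlining: ld.global.f32
    %v2 = call float @foo(float* %s)   ; after inlining: ld.shared.f32
    ret void
  }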

Just a note of caution: for some of us, address spaces are semantically important (i.e., having a cast introduced from one to another would be incorrect). I have no problem with the mechanism you're describing being implemented, but it needs to be an opt-in feature. No opinion on where the pass should live, but if it is target-independent, it needs to be behind an opt-in target hook.

Just to note, this last bit raises far fewer worries for me about the correctness of my work. If you're loading from a pointer that was addrspacecast from a different address space, it seems very logical to combine that with the load. We'd also never generate code like that. :)

To restate my concern in general terms, it's the introduction of new casts that worries me, not the exploitation/optimization of existing ones.

This interpretation fits my reasoning as well. I would assume it's legal to reason about aliasing (for value forwarding, etc.), but not to reason about the semantics of the exact load used (i.e., one could perform a runtime manipulation of the loaded value, the other might not).

Philip