[LLD] Relocation overflows and .nv_fatbin

Hello

I am seeing relocation overflows from the .text section into .nv_fatbin. The whole .nv_fatbin thing is a bit of a black box, but there does appear to be only one such output section. We have a downstream patch in LLD that moves the .nv_fatbin input section(s) that have relocations into them to the "top" of the output section. Looking around at what's in .nv_fatbin, the rest should be a bunch of CUDA code. So, in theory, that can grow and we shouldn't get any more relocation overflows, at least not due to the size of .nv_fatbin.

I was wondering if there is a better way of doing it. Maybe with a linker script? I investigated that, and the answer seems to be no, but I am not an expert in linker scripts.

Thank You
Alex

I implemented INSERT [AFTER|BEFORE] for orphan sections in D74375 ([ELF] Support INSERT [AFTER|BEFORE] for orphan sections).
You may consider moving the .nv* and __nv* sections after .bss.
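
A minimal linker script sketch of that suggestion (the single section pattern is an assumption; you may need more patterns to catch all the .nv*/__nv* sections your objects contain):

SECTIONS {
  .nv_fatbin : { *(.nv_fatbin) }
} INSERT AFTER .bss;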

But the linker-synthesized etext/_etext may end up in a weird position.
To fix that, use the OVERWRITE_SECTIONS feature I added for LLD 13.0.0:

OVERWRITE_SECTIONS {
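   /* Define etext/_etext explicitly so the moved .nv* sections do not leave them in an unexpected place. */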
   .tdata : { etext = .; _etext = .; *(.tdata) }
}

Sorry I wasn't clear. I am not talking about moving .nv_fatbin in its entirety, although we are also doing that, with a linker script INSERT AFTER .bss. I was referring to rearranging input sections within the .nv_fatbin output section.

So, in .text* we have a relocation into the .nv_fatbin input section from foo3.o.

In the output .nv_fatbin, without any changes, we will have:
foo1.o input section (some CUDA code)
foo2.o input section (some CUDA code)
foo3.o input section (has a relocation into it)

With this layout we get a relocation overflow.

If we shuffle things:
foo3.o (has a relocation into it from .text*)
foo1.o (some CUDA code)
foo2.o (some CUDA code)

It shortens the distance from the source to the destination of the relocation, and all the other CUDA sections can grow.
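
For concreteness, a worked example with invented sizes: a 32-bit PC-relative relocation such as R_X86_64_PC32 reaches at most ±2 GiB. If the foo1.o and foo2.o input sections together contribute, say, 2.5 GiB and foo3.o comes last, the distance from the reference in .text to foo3.o's section exceeds 2^31 bytes and the relocation overflows. With foo3.o placed first, that distance is just the gap between .text and the start of .nv_fatbin, so foo1.o and foo2.o can keep growing behind it without affecting the relocation.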

Hopefully, this clarifies things.
Alex

Reordering output sections is much more effective.

The ELF port has an option, --symbol-ordering-file, which reorders input sections within one output section, but I doubt you can find suitable symbols to use as markers. The option was originally conceived to improve performance (by optimizing for instruction cache/iTLB locality), not to mitigate relocation overflows. (macOS ld64 has a similar but more powerful -order_file, which can specify input filenames.)
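
For illustration, this is roughly how the option is used; the symbol name below is hypothetical, and as said above, the fatbin input sections may well not define any symbol you could list:

# order.txt: one defined symbol per line; input sections defining
# these symbols are laid out first within their output section
my_fatbin_marker_symbol

ld.lld --symbol-ordering-file=order.txt foo1.o foo2.o foo3.o -o a.out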

Reordering input sections automatically has some small value but it would break phase ordering and cause more maintenance burden. So I very strongly object to that.

> Reordering output sections is much more effective.

If you have a huge .nv_fatbin, then without moving input sections you may end up with relocations at its beginning as well as at its end. Then no matter where you put .nv_fatbin relative to .text (before or after), the maximum relocation distance will grow as .nv_fatbin grows.

So reordering output sections alone isn't enough.

> Reordering input sections automatically has some small value but it would break phase ordering and cause more maintenance burden.

Could you elaborate on phase ordering?

IMO, if your image has grown large enough to cause reloc overflows, rearranging the location of the GPU binaries or rearranging the objects inside .nv_fatbin would only give you marginal benefits. You may be able to shuffle things around enough to avoid the issue for the time being, but it will not change the fact that the executable is too large and the overflow will come back sooner or later, as binaries tend to grow over time.

I would suggest considering reducing the executable size instead:

  • Use nvprune to remove GPU binaries you do not need. CUDA libraries come with GPU binaries for all major GPU variants, and that's a lot of GPU code. If you're only interested in one of those GPUs, use nvprune to keep the GPU blobs only for your GPU; that will reduce the executable size a lot (see the sketch after this list).
  • Link with the CUDA libraries dynamically. This also avoids the executable relocation issues, but adds runtime dependencies, which may be an issue in some cases.
  • If most of the GPU code comes from sources you compile yourself, then you can try enabling GPU image compression with -Xcuda-fatbinary --compress-all. This assumes you're compiling with clang, but I think NVCC should have a similar way to pass an option to fatbinary.
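
A sketch of the first and third suggestions; the library name, file names, and the sm_80 architecture are placeholders:

# keep only the sm_80 GPU code in a statically linked CUDA library
nvprune --arch sm_80 libcublas_static.a -o libcublas_static.sm80.a

# compress the embedded GPU images when compiling CUDA sources with clang
clang++ --cuda-gpu-arch=sm_80 -Xcuda-fatbinary --compress-all -c app.cu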

While none of these workarounds solves the underlying issue, they do tend to provide sufficient relief in most cases.

–Artem

Thanks for the suggestions, Artem.

Can you clarify "You may be able to shuffle things around enough to avoid the issue for the time being, but it will not change the fact that the executable is too large and the overflow will come back sooner or later, as binaries tend to grow over time"?

As far as I can tell there is only one relocation from .text into the CUDA code. Some kind of CUDA runtime. If the rest is just GPU code, and .nv_fatbin is after .bss, then it seems like .nv_fatbin can continue to grow. I guess .text and the other sections can grow too, and eventually, yes, we will hit relocation overflows because of that.

Alex

> As far as I can tell there is only one relocation from .text into the CUDA code.

This is an implementation detail. There's no guarantee that it will be the case for everyone; it's just data. Nothing stops me from writing code that accesses some GPU binaries directly, and I believe some of the CUDA libraries do so.

> Some kind of CUDA runtime. If the rest is just GPU code, and .nv_fatbin is after .bss, then it seems like .nv_fatbin can continue to grow.

It just happens to end up there as yet another data section. It is not expected to grow at runtime. AFAICT the GPU binaries are placed in a special section so that various CUDA tools, e.g. cuobjdump, can find them. Renaming the section or moving it around will not affect the functionality of the application itself.

> I guess .text and the other sections can grow too, and eventually, yes, we will hit relocation overflows because of that.

It will all depend on the specifics of what gets linked into your executable.

In general, if the sum total of your code and data is more than 2 GiB, the possibility of an overflow is there (that is the reach of the signed 32-bit relocations the small code model relies on). For an executable that large, it's impractical-to-impossible to guarantee that code X and the data Y it accesses are close enough. You can often do it in a specific case, but not in general. You do not control what ends up in .nv_fatbin, and you do not have control over who accesses it, where, and how. Moving the fatbin to the top would move it out of the way of relocs between .text and regular data, but you're still open to overflows between the end of .text and .nv_fatbin, if they are large enough.

In other words, relocating the GPU blobs to the top may provide a benefit, but it's not a complete solution. We've tried that already internally. One example: it was not sufficient to avoid overflows in a TensorFlow application with all the needed CUDA libraries statically linked in.

BTW, I did attempt to move .nv_fatbin upwards before. We concluded that it wasn't worth it at the time: https://reviews.llvm.org/D47396

–Artem