We have added an option to LLD to not apply relocations when writing out an ELF section, and I wonder if you would accept it upstream. Why would we want that, I hear you ask?
Suppose we have a weird microcontroller with a 16-bit memory space. Naturally its relocations will be 16 bits (or less). If you try to link a program larger than 65536 bytes, some symbols will end up at addresses that cannot fit in a 16-bit relocation field. When the section is written out, the linker reports a relocation-out-of-range error and the link fails. So far so reasonable. But what if your program *almost* fits in 65536 bytes? Wouldn't it be nice to get a binary out that is 65667 bytes (or whatever), even if it is broken? Then you can at least inspect it, e.g. with Bloaty McBloatface, to see where you can save memory.
That's where this option helps. It simply skips applying relocations when writing sections out, so you always get a (broken) binary even when out-of-range relocations would otherwise have aborted the link. The patch is really small (see below). Would you be willing to accept this, or if not, can you suggest an alternative? Things I have tried / people have suggested:
* Use -Map instead. Unfortunately the map file is written after relocations are resolved, so this does not work.
* Use --relocatable. This does work, but it doesn't produce the same file you would get if the link had succeeded. In particular you can't use it with --gc-sections, which could make a huge difference if you compile with -ffunction-sections.
* Just pass all your .o files to Bloaty instead of the final binary. Unfortunately this suffers from the same problem: it doesn't account for --gc-sections or anything else the linker does.
This works nicely for us. What do you think?