[LLD] Adding WebAssembly support to lld

Hi llvmers,

As you may know, work has been progressing on the experimental
WebAssembly backend in llvm. However, there is currently not a good
linking story. Most of the existing linking strategies (i.e. those in
the emscripten toolchain) involve bitcode linking and whole program
compilation at link time.

To improve this situation I've been working on adding a wasm backend
for lld. My current work is here: https://reviews.llvm.org/D34851
([WebAssembly] Initial wasm linker implementation).

Although this port is not ready for production use (it's missing
several key features such as comdat support and full support for weak
aliases), it's already getting some testing on the wasm waterfall:
https://wasm-stat.us/builders/linux

I'm hopeful that my patch may now be at an MVP stage that could be
considered for merging into upstream lld. Thoughts? LLD maintainers,
would you support the addition of a new backend?

cheers,
sam

Can you link to docs about the wasm object format? (both relocatable and executable)

Also, traditional object file linkers are primarily concerned with concatenating binary blobs with a small amount of patching of said binary blobs based on computed virtual (memory) addresses. Or perhaps to put it another way, what traditional object file linkers do is construct program images meant to be mapped directly into memory.
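To make the contrast concrete, a traditional relocation mostly amounts to a patch like the following sketch (deliberately minimal, not lld code; all names are made up):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Sketch only: in a traditional linker, a relocation mostly boils down to
// writing a computed virtual address at a fixed offset in the output image.
struct SimpleReloc {
  uint64_t Offset; // where to patch, relative to the output section
  uint64_t SymVA;  // virtual address assigned to the symbol during layout
  int64_t Addend;
};

void applyAbs32(std::vector<uint8_t> &Image, const SimpleReloc &R) {
  uint32_t Value = static_cast<uint32_t>(R.SymVA + R.Addend);
  std::memcpy(Image.data() + R.Offset, &Value, sizeof(Value));
}
```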

My understanding is that wasm is pretty different from this (though “linker frontend” things like the symbol resolution process are presumably similar). Looking at Writer::run in your patch it seems like wasm is indeed very different. E.g. the linker is aware of things like “types” and knows the internal structure of functions (e.g. write_sig knows how many parameters a function has).

Can you elaborate on semantically what the linker is actually doing for wasm?

– Sean Silva

Hi Sam,

First, I want to know the symbol resolution semantics. I can imagine that it is not set in stone yet, and that you guys are still discussing what would be the best semantics or file format for the linkable wasm object file. I think by knowing more about the format and semantics, we can give you guys valuable feedback, as we’ve been actively working on the linker for a few years now. (And we know a lot of issues in existing object file formats, so I don’t want you guys to repeat those mistakes.)

As Sean pointed out, this looks very different from ELF or COFF in object construction. Does this mean the linker has to reconstruct everything? The ELF and COFF linkers are multi-threaded, as each thread can work on different sections simultaneously when writing to an output file. I wonder if it’s still doable in wasm.

Also, I wonder if there’s a way to parallelize symbol resolution. Since there are no existing linkable wasm programs, we can take a radical approach.

Have you ever considered making the file format more efficient than ELF or COFF so that it can be linked really fast? For example, in order to avoid handling a lot of (possibly very long, due to name mangling) symbol names, you could store SHA hashes or something so that linkers are able to handle symbols as an array of fixed-size elements.

That is just an example. There are a lot of possible improvements we can make for a completely new file format.
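To make that a bit more concrete, a hypothetical fixed-size symbol table entry could look something like this sketch (field names and sizes are purely illustrative, not a format proposal):

```cpp
#include <cstdint>

// Hypothetical on-disk symbol entry: fixed size, so a linker can mmap the
// table and treat it as an array without scanning for NUL terminators.
struct SymbolEntry {
  uint64_t NameHash;   // e.g. a truncated SHA or other strong hash of the
                       // (possibly mangled) name
  uint32_t NameOffset; // offset into a string table, consulted only on a
                       // hash collision or for diagnostics
  uint32_t NameLength; // avoids strlen() at link time
  uint32_t Flags;      // binding, visibility, etc.
  uint32_t SectionIndex;
};
static_assert(sizeof(SymbolEntry) == 24, "entries must stay fixed-size");
```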

Hi Sam,

First, I want to know the symbol resolution semantics. I can imagine that
it is not set in stone yet, and that you guys are still discussing what
would be the best semantics or file format for the linkable wasm object
file. I think by knowing more about the format and semantics, we can give
you guys valuable feedback, as we've been actively working on the linker
for a few years now. (And we know a lot of issues in existing object file
formats, so I don't want you guys to repeat those mistakes.)

As Sean pointed out, this looks very different from ELF or COFF in object
construction. Does this mean the linker has to reconstruct everything? The
ELF and COFF linkers are multi-threaded, as each thread can work on
different sections simultaneously when writing to an output file. I wonder
if it's still doable in wasm.

Also, I wonder if there's a way to parallelize symbol resolution. Since
there are no existing linkable wasm programs, we can take a radical approach.

Another question for Sam is how many "innovation tokens" the wasm folks are
prepared to burn on the object format. E.g. do they not really care as long
as it works, or are they willing to invest significant time in designing
and optimizing it?

There are a lot of interesting directions when creating a new object format
(this was actually one of the initial goals of LLD, way back at the
project's inception!). There are lots of ideas, but very little has actually
been explored even to the point of knowing that making X change will give
Y% speedup. So most (all?) of these things are definitely "research" type
work.

Also, looking at LLD's profile, there actually aren't really many things
that immediately stand out as major (order of magnitude) improvements that
are possible. The only obvious major thing that sticks out to me would be
that if the relocations don't affect "layout" (or the wasm equivalent; e.g.
don't require allocating bss or GOT entries), then we need only a single
scan over relocations, which is about 30% of the current link time.

Ultimately we will be limited by disk IO (and if I remember from Rui's
presentation, we're only about 4x slower than `cp`) unless we move to a
model that takes writing the output file to disk off the critical path.

Have you ever considered making the file format more efficient than ELF
or COFF so that it can be linked really fast? For example, in order to avoid
handling a lot of (possibly very long, due to name mangling) symbol names,
you could store SHA hashes or something so that linkers are able to handle
symbols as an array of fixed-size elements.

For Sam's benefit, this is something that we've been thinking about for a
while, but we don't really know how much speedup it will really give. (And
IIRC Chandler said at one point that something like this had been tried at
Google but the extra per-TU time didn't pay off in the link time, or
something like that.)

Also, strings are only a big bottleneck (last I checked the profile) in
debug info builds, and we already know that that's just a fundamental
problem due to split DWARF not being ubiquitous for ELF at this point in
time.

That is just an example. There are a lot of possible improvements we can
make for a completely new file format.

I guess one thing that would be good to clarify is the design goal of this
wasm linker. (and how interested you are in changing the format at this
point)
If I had to guess, I would guess that ideally the wasm linker would be a
drop-in replacement for a standard native linker, so that changes to user
build systems are minimal.

E.g. the linker invocation would want to stay `ld main.o libfoo.a libbar.a
...` just like in the corresponding native link. (Although how does wasm
handle ar? For LLVM bitcode in LTO, that's always a stumbling block.)

Sam, could you clarify?

If symbol resolution, "input section" selection, and archive semantics
aren't close to that of native linkers, then it would make it difficult to
port existing C/C++ apps (every app has e.g. an __attribute__((weak)) in
there somewhere, or a C++ inline function so comdat/linkonce is needed,
etc.).

-- Sean Silva

Can you link to docs about the wasm object format? (both relocatable and
executable)

The executable format is described here:

The relocatable format that the 'wasm32-unknown-unknown-wasm' LLVM
target currently emits (and this lld port currently accepts) is still
a work in progress and is (probably somewhat incompletely) described
here:

Also, traditional object file linkers are primarily concerned with
concatenating binary blobs with a small amount of patching of said binary
blobs based on computed virtual (memory) addresses. Or perhaps to put it
another way, what traditional object file linkers do is construct program
images meant to be mapped directly into memory.

My understanding is that wasm is pretty different from this (though "linker
frontend" things like the symbol resolution process are presumably similar).
Looking at Writer::run in your patch it seems like wasm is indeed very
different. E.g. the linker is aware of things like "types" and knows the
internal structure of functions (e.g. write_sig knows how many
parameters a function has).

Can you elaborate on semantically what the linker is actually doing for
wasm?

You are correct that the wasm linker does have more work to do than a
traditional linker. There are more sections that the linker will need
to re-construct fully. This is because there is more high-level
information required in the wasm format: for example, as you point
out, the type of each function. Functions also live in their own
index space outside of the program's memory space. This means that
the simple approach of traditional linkers, where almost everything can
be boiled down to virtual addresses, doesn't make as much sense here.
This is part of the reason why early attempts to use ELF as the
encapsulation format were abandoned: wasm is different enough that it
didn't make sense.

Having said that, we've tried to ensure that the code and data
sections can be blindly concatenated (modulo relocations), and that
should allow some of the multi-threaded optimizations in lld
to be leveraged.
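To illustrate what that reconstruction involves, here is a rough sketch of the idea (not code from the patch): each input object has its own function index space, with imports numbered before defined functions, so the linker builds a mapping from input indices to output indices and rewrites function-index relocations against it.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Sketch: assign output function indices for one input object. In wasm,
// imported functions occupy the low indices and defined functions follow,
// so the remapping is a table lookup rather than a single base offset.
struct InputFunction {
  uint32_t InputIndex; // index within the input object's function space
  bool IsImport;
};

using IndexMap = std::unordered_map<uint32_t, uint32_t>;

IndexMap assignOutputIndices(const std::vector<InputFunction> &Funcs,
                             uint32_t NumOutputImports,
                             uint32_t &NextImport, uint32_t &NextDefined) {
  IndexMap Map;
  for (const InputFunction &F : Funcs) {
    uint32_t OutIndex =
        F.IsImport ? NextImport++ : NumOutputImports + NextDefined++;
    Map[F.InputIndex] = OutIndex;
  }
  return Map;
}
// A function-index relocation in the concatenated code section is then
// resolved by looking up Map[OldIndex] and rewriting the call target.
```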

Hi Sam,

First, I want to know the symbol resolution semantics. I can imagine that
it is not set in stone yet, and that you guys are still discussing what
would be the best semantics or file format for the linkable wasm object
file. I think by knowing more about the format and semantics, we can give
you guys valuable feedback, as we've been actively working on the linker for
a few years now. (And we know a lot of issues in existing object file
formats, so I don't want you guys to repeat those mistakes.)

I've been aiming to match the semantics of native linkers in order
to minimize porting effort and allow existing software and build
systems to target WebAssembly.

Specifically, I'm trying to match what the ELF port of lld does in
terms of loading archives, objects and symbols, including the
resolution of weak symbols. Indeed, you can see that I borrow a large
amount of the SymbolTable code. Like the other lld ports, it does not
use a left-to-right-only strategy of symbol resolution.
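Roughly speaking, the precedence being mirrored is the usual native-linker one; a simplified sketch (not the actual SymbolTable code) of the rule applied when a symbol is seen again:

```cpp
// Simplified sketch of symbol resolution precedence: strong definitions win
// over weak definitions, which win over lazy (archive) symbols, which win
// over plain undefined references. Not shown: an undefined reference hitting
// a lazy symbol triggers extraction of the archive member, and two strong
// definitions are a duplicate-symbol error.
enum class SymKind { Undefined, Lazy, WeakDefined, Defined };

static int rank(SymKind K) {
  switch (K) {
  case SymKind::Undefined:   return 0;
  case SymKind::Lazy:        return 1;
  case SymKind::WeakDefined: return 2;
  case SymKind::Defined:     return 3;
  }
  return 0;
}

// Returns true if the newly seen symbol should replace the existing entry.
bool shouldReplace(SymKind Existing, SymKind Incoming) {
  return rank(Incoming) > rank(Existing);
}
```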

As Sean pointed out, this looks very different from ELF or COFF in object
construction. Does this mean the linker has to reconstruct everything? The
ELF and COFF linkers are multi-threaded, as each thread can work on
different sections simultaneously when writing to an output file. I wonder
if it's still doable in wasm.

It should be doable for the data and code sections. For the other
section types (of which there are at least 8), we will most likely be
forced to reconstruct them fully and I'm not sure if that will be
parallelizable in the same way. I'm hoping that doing code and data
and relocations in parallel will still be a big win.
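As an illustration, here is a sketch of the kind of parallel write this allows, using plain std::async rather than lld's actual parallel utilities, and assuming every chunk's final offset has already been computed during layout (illustrative only):

```cpp
#include <cstdint>
#include <cstring>
#include <future>
#include <vector>

struct OutputChunk {
  const uint8_t *Data; // contents copied from the input object
  size_t Size;
  size_t OutputOffset; // precomputed during layout, so writes never overlap
};

// Because every chunk's destination is known up front, chunks can be copied
// into the output buffer from multiple threads without any synchronization.
void writeChunksParallel(uint8_t *OutBuf,
                         const std::vector<OutputChunk> &Chunks) {
  std::vector<std::future<void>> Jobs;
  for (OutputChunk C : Chunks)
    Jobs.push_back(std::async(std::launch::async, [C, OutBuf] {
      std::memcpy(OutBuf + C.OutputOffset, C.Data, C.Size);
    }));
  for (std::future<void> &J : Jobs)
    J.get();
}
```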

Also, I wonder if there's a way to parallelize symbol resolution. Since
there are no existing linkable wasm programs, we can take a radical approach.

Have you ever considered making the file format more efficient than ELF or
COFF so that it can be linked really fast? For example, in order to avoid
handling a lot of (possibly very long, due to name mangling) symbol names,
you could store SHA hashes or something so that linkers are able to handle
symbols as an array of fixed-size elements.

That is just an example. There are a lot of possible improvements we can
make for a completely new file format.

At this point I've mostly been focused on producing a working linker
that matches the semantics of existing native linkers. My short-term goal
is to provide something that can replace the current bitcode linking
solution. Perhaps I'm aiming too low :)

Sam Clegg via llvm-dev <llvm-dev@lists.llvm.org> writes:

Can you elaborate on semantically what the linker is actually doing for
wasm?

You are correct that the wasm linker does have more work to do than a
traditional linker. There are more sections that the linker will need
to re-construct fully. This is because there is more high-level
information required in the wasm format: for example, as you point
out, the type of each function. Functions also live in their own
index space outside of the program's memory space. This means that
the simple approach of traditional linkers, where almost everything can
be boiled down to virtual addresses, doesn't make as much sense here.
This is part of the reason why early attempts to use ELF as the
encapsulation format were abandoned: wasm is different enough that it
didn't make sense.

BTW, is that summarized somewhere?

I remember the discussion about having relocations that would resolve to
function numbers, but I don't remember where that failed.

Cheers,
Rafael

(One option to allow faster progress without solving all the issues in the linker space up front, especially when starting with an implementation in LLD anyway, would be to make the format compatible only with lld, and potentially only with a matching lld version. I don’t in general like this idea; for example, I pushed back recently on work related to linker ODR checking that would create such a dependence for that feature. But as a short/intermediate-term measure, with the goal of working towards a more stable/portable format, I think it might be worth considering: that way there’s no baked-in, backwards-compatible format, and you can iterate, fix issues in the format that lead to performance issues in the linker, etc., and then, when it’s baked enough, declare it to be the format that shall be supported/portable/etc.)

Sam Clegg via llvm-dev <llvm-dev@lists.llvm.org> writes:

>> Can you elaborate on semantically what the linker is actually doing for
>> wasm?
>
> You are correct that the wasm linker does have more work to do than a
> traditional linker. There are more sections that the linker will need
> to re-construct fully. This is because there is more high-level
> information required in the wasm format: for example, as you point
> out, the type of each function. Functions also live in their own
> index space outside of the program's memory space. This means that
> the simple approach of traditional linkers, where almost everything can
> be boiled down to virtual addresses, doesn't make as much sense here.
> This is part of the reason why early attempts to use ELF as the
> encapsulation format were abandoned: wasm is different enough that it
> didn't make sense.

BTW, is that summarized somewhere?

I remember the discussion about having relocations that would resolve to
function numbers, but I don't remember where that failed.

Looking at Sam's patch, it seems like that's what it does, e.g.
R_WEBASSEMBLY_FUNCTION_INDEX_LEB

-- Sean Silva

Sean Silva <chisophugis@gmail.com> writes:

The standard executable WebAssembly format does not use ELF, for numerous
reasons, most visibly that ELF is designed for sparse decoding -- headers
contain offsets to arbitrary points in the file, while WebAssembly's format
is designed for streaming decoding. Also, as Sam mentioned, there are a lot
of conceptual differences. In ELF, virtual addresses are a pervasive
organizing principle; in WebAssembly, it's possible to think about various
index spaces as virtual address spaces, but not all address-space-oriented
assumptions apply.

It would also be possible for WebAssembly to use ELF ET_REL files just for
linking; however, telling LLVM and other tools to target ELF tends to lead
them to assume that the final output is ELF and rely on ELF-specific
features.

The problems are solvable, but particularly for the long term, it seems
that WebAssembly would be better off using a format that fits how it
actually works, rather than using a format built around abstractions that
don't fit, even if they can be emulated.

Dan

Dan Gohman <sunfish@mozilla.com> writes:

Sorry, I meant why that didn't work with ELF (or what else didn't).

The standard executable WebAssembly format does not use ELF, for numerous
reasons, most visibly that ELF is designed for sparse decoding -- headers
contain offsets to arbitrary points in the file, while WebAssembly's format
is designed for streaming decoding. Also, as Sam mentioned, there are a lot
of conceptual differences. In ELF, virtual addresses are a pervasive
organizing principle; in WebAssembly, it's possible to think about various
index spaces as virtual address spaces, but not all address-space-oriented
assumptions apply.

I can see why you would want your own format for distribution. My
question was really about using ELF for the .o files.

It would also be possible for WebAssembly to use ELF ET_REL files just for
linking; however, telling LLVM and other tools to target ELF tends to lead
them to assume that the final output is ELF and rely on ELF-specific
features.

Things like "the dynamic linker implements copy relocations"?

Cheers,
Rafael

Sorry for the belated response. I was on vacation last week. A couple of thoughts on this patch and the story of webassembly linking.

  • This patch adds wasm support as yet another major architecture besides ELF and COFF. That is fine and actually aligned with the design principle of the current lld. Wasm is probably more different from ELF than COFF is, and the reason why we separated COFF and ELF was because they are different enough that it is easier to handle them separately rather than writing a complex compatibility layer for the two. So that is, I think, the right design choice. That being said, some files are unnecessarily copied to all targets. Particularly, Error.{cpp,h} and Memory.{h,cpp} need to be merged because they are mostly identical.

  • I can imagine that you would eventually want to support two modes of wasm object files. In one form, object files are represented in the compact format using LEB128 encoding, and the linker has to decode and re-encode LEB128 instruction streams. In the other form, they are still in LEB128 but use the full 5 bytes for 4-byte numbers, so that you can just concatenate them without decoding/re-encoding. Which mode do you want to make the default? The latter should be much faster than the former (or the former is probably unnecessarily slow), and because the regular compile-link-run cycle is very important for developers, I’d guess that making the latter the default is a reasonable choice, although this patch implements the former. What do you think about it?

  • Storing the length and a hash value for each symbol in the symbol table may speed up linking. We’ve learned that finding terminating NULs and computing hash values for symbols is a time-consuming process in the linker.

That's a good example. Copy relocations are a way to avoid writing to the
pages of the text segment of the main executable that's typically been
mmapped from a file. WebAssembly doesn't mmap text segments into the
program in the first place, so ELF code that thinks it knows things about
ELF-style copy relocations doesn't apply.

Dan

Sorry for the belated response. I was on vacation last week. A couple of
thoughts on this patch and the story of webassembly linking.

And I'm about to be on (mostly) vacation for the next 3 weeks :)

- This patch adds wasm support as yet another major architecture besides
ELF and COFF. That is fine and actually aligned with the design principle of
the current lld. Wasm is probably more different from ELF than COFF is, and
the reason why we separated COFF and ELF was because they are different
enough that it is easier to handle them separately rather than writing a
complex compatibility layer for the two. So that is, I think, the right
design choice. That being said, some files are unnecessarily copied to all
targets. Particularly, Error.{cpp,h} and Memory.{h,cpp} need to be merged
because they are mostly identical.

I concur. However, would you accept the wasm port landing first, and
then factoring some kind of library out of the 3 backends after that?
Personally I would prefer to land the initial version without
touching the ELF/COFF backends and refactor in a second pass.

- I can imagine that you would eventually want to support two modes of wasm
object files. In one form, object files are represented in the compact
format using LEB128 encoding, and the linker has to decode and re-encode
LEB128 instruction streams. In the other form, they are still in LEB128 but
use the full 5 bytes for 4-byte numbers, so that you can just concatenate them
without decoding/re-encoding. Which mode do you want to make the default? The
latter should be much faster than the former (or the former is probably
unnecessarily slow), and because the regular compile-link-run cycle is very
important for developers, I'd guess that making the latter the default is a
reasonable choice, although this patch implements the former. What do you
think about it?

Yes, currently relocatable wasm files (as produced by clang) use fixed-width
LEB128 (padded to five bytes) for any relocation targets. This
allows the linker to trivially apply relocations and blindly
concatenate data and code sections. We specifically avoid any
instruction decoding in the linker. The plan is to add an optional
pass over the generated code section of an executable file to compress
the relocation targets to their normal LEB128 size. Whether or not to
make this the default is TBD.
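For illustration, writing a 32-bit relocation value as a padded 5-byte LEB128 is roughly the following sketch (not lifted from the patch); because the width is fixed, no surrounding bytes ever need to move:

```cpp
#include <cstdint>

// Encode Value at Loc as an unsigned LEB128 padded to exactly 5 bytes, the
// width reserved for 32-bit relocation targets in the relocatable format.
void writeUleb128Padded5(uint8_t *Loc, uint32_t Value) {
  for (int I = 0; I < 5; ++I) {
    uint8_t Byte = Value & 0x7f;
    Value >>= 7;
    if (I != 4)
      Byte |= 0x80; // continuation bit on all but the final byte
    Loc[I] = Byte;
  }
}
```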

- Storing the length and a hash value for each symbol in the symbol table
may speed up linking. We've learned that finding terminating NULs and
computing hash values for symbols is a time-consuming process in the linker.

Yes, I imagine we could even share some of the core symbol table code
via the above mentioned library?

> Sorry for the belated response. I was on vacation last week. A couple of
> thoughts on this patch and the story of webassembly linking.

And I'm about to be on (mostly) vacation for the next 3 weeks :)

>
> - This patch adds wasm support as yet another major architecture besides
> ELF and COFF. That is fine and actually aligned with the design principle of
> the current lld. Wasm is probably more different from ELF than COFF is, and
> the reason why we separated COFF and ELF was because they are different
> enough that it is easier to handle them separately rather than writing a
> complex compatibility layer for the two. So that is, I think, the right
> design choice. That being said, some files are unnecessarily copied to all
> targets. Particularly, Error.{cpp,h} and Memory.{h,cpp} need to be merged
> because they are mostly identical.

I concur. However, would you accept the wasm port landing first, and
then factoring some kind of library out of the 3 backends after that?
Personally I would prefer to land the initial version without
touching the ELF/COFF backends and refactor in a second pass.

Yes, we can do that later.

> - I can imagine that you would eventually want to support two modes of wasm
> object files. In one form, object files are represented in the compact
> format using LEB128 encoding, and the linker has to decode and re-encode
> LEB128 instruction streams. In the other form, they are still in LEB128 but
> use the full 5 bytes for 4-byte numbers, so that you can just concatenate
> them without decoding/re-encoding. Which mode do you want to make the
> default? The latter should be much faster than the former (or the former is
> probably unnecessarily slow), and because the regular compile-link-run cycle
> is very important for developers, I'd guess that making the latter the
> default is a reasonable choice, although this patch implements the former.
> What do you think about it?

Yes, currently relocatable wasm files (as produced by clang) use fixed-width
LEB128 (padded to five bytes) for any relocation targets. This
allows the linker to trivially apply relocations and blindly
concatenate data and code sections. We specifically avoid any
instruction decoding in the linker. The plan is to add an optional
pass over the generated code section of an executable file to compress
the relocation targets to their normal LEB128 size. Whether or not to
make this the default is TBD.

Does this strategy make sense?

- make compilers always emit fixed-width LEB128, so that linkers can link
them just by concatenating them and applying relocations,
- make the linker emit fixed-width LEB128 by default as well, so that it
can create executables as fast as it can, and
- write an optional re-encoder which decodes and re-encodes fixed-width
LEB128 to "compress" the final output.

The third one can be an internal linker pass which is invoked when you pass
-O1 or something to the linker, but conceptually it is separate from the
"main" linker.

The rationale behind this strategy is that

- Developers usually want to create outputs as fast as linkers can produce
them. Creating final executables for shipping is (probably by far) less
frequent. I also expect that, if wasm is successful, you'll be compiling and
linking large programs using wasm as a target (on a successful platform,
people start doing something incredible/crazy in general), so toolchain
performance will matter. You want to optimize it for the regular
compile-link-debug cycle.
- Creating an output just by concatenating input file sections is, I believe,
easier than decoding and re-encoding LEB128 fields. So I think we want to
construct the linker based on that design, instead of directly emitting
variable-size LEB128 fields.
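To make the third step concrete, the re-encoder could be as simple as the following sketch (illustrative only): decode each padded 5-byte field and append its minimal encoding while copying the code section.

```cpp
#include <cstdint>
#include <vector>

// Sketch of the optional "compression" pass: read a value stored as a
// padded 5-byte LEB128 and append its minimal LEB128 encoding to the
// compressed output buffer.
uint32_t readUleb128Padded5(const uint8_t *Loc) {
  uint32_t Result = 0;
  for (int I = 0; I < 5; ++I)
    Result |= static_cast<uint32_t>(Loc[I] & 0x7f) << (7 * I);
  return Result;
}

void appendUleb128Minimal(std::vector<uint8_t> &Out, uint32_t Value) {
  do {
    uint8_t Byte = Value & 0x7f;
    Value >>= 7;
    if (Value != 0)
      Byte |= 0x80; // more bytes follow
    Out.push_back(Byte);
  } while (Value != 0);
}
```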

> Sorry for the belated response. I was on vacation last week. A couple of
> thoughts on this patch and the story of webassembly linking.

And I'm about to be on (mostly) vacation for the next 3 weeks :)

>
> - This patch adds wasm support as yet another major architecture besides
> ELF and COFF. That is fine and actually aligned with the design principle of
> the current lld. Wasm is probably more different from ELF than COFF is, and
> the reason why we separated COFF and ELF was because they are different
> enough that it is easier to handle them separately rather than writing a
> complex compatibility layer for the two. So that is, I think, the right
> design choice. That being said, some files are unnecessarily copied to all
> targets. Particularly, Error.{cpp,h} and Memory.{h,cpp} need to be merged
> because they are mostly identical.

I concur. However, would you accept the wasm port landing first, and
then factoring some kind of library out of the 3 backends after that?
Personally I would prefer to land the initial version without
touching the ELF/COFF backends and refactor in a second pass.

Yes, we can do that later.

> - I can imagine that you would eventually want to support two modes of wasm
> object files. In one form, object files are represented in the compact
> format using LEB128 encoding, and the linker has to decode and re-encode
> LEB128 instruction streams. In the other form, they are still in LEB128 but
> use the full 5 bytes for 4-byte numbers, so that you can just concatenate
> them without decoding/re-encoding. Which mode do you want to make the
> default? The latter should be much faster than the former (or the former is
> probably unnecessarily slow), and because the regular compile-link-run cycle
> is very important for developers, I'd guess that making the latter the
> default is a reasonable choice, although this patch implements the former.
> What do you think about it?

Yes, currently relocatable wasm files (as produced by clang) use fixed-width
LEB128 (padded to five bytes) for any relocation targets. This
allows the linker to trivially apply relocations and blindly
concatenate data and code sections. We specifically avoid any
instruction decoding in the linker. The plan is to add an optional
pass over the generated code section of an executable file to compress
the relocation targets to their normal LEB128 size. Whether or not to
make this the default is TBD.

Does this strategy make sense?

- make compilers always emit fixed-width LEB128, so that linkers can link
them just by concatenating them and applying relocations,
- make the linker emit fixed-width LEB128 by default as well, so that it
can create executables as fast as it can, and
- write an optional re-encoder which decodes and re-encodes fixed-width
LEB128 to "compress" the final output.

The third one can be an internal linker pass which is invoked when you pass
-O1 or something to the linker, but conceptually it is separate from the
"main" linker.

IIUC that is exactly the strategy I am suggesting. Perhaps my
description of it was less clear. The current implementation does this,
with the caveat that the final (optional) compression phase is not yet
implemented :)

For the record, I left review comments at https://reviews.llvm.org/D34851.