I’m investigating the size of Clang’s generated binaries relative to GCC’s when targeting Android, and I’ve noticed that Clang’s exception tables are much larger: the .ARM.extab section is about 2.5 times as large in two examples.
I noticed a couple of differences between Clang and GCC:
- ULEB128 encoding.
In the call site table, GCC encodes offsets using a ULEB128 variable-length encoding, whereas LLVM uses fixed-size 32-bit values (udata4).
Switching to ULEB128 yields a large size improvement. For instance, I’m currently playing with Qt5, and the libQt5Core.so binary is about 4MB with a 215KB .ARM.extab section. Switching the encoding cuts the .ARM.extab section down to 100KB, for an overall file size reduction of about 2.9%.
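For reference, ULEB128 encodes an unsigned integer 7 bits at a time, low bits first, with the high bit of each byte marking continuation. A minimal Python sketch (function name is mine, not from LLVM or GCC):

```python
def encode_uleb128(value):
    """Encode a non-negative integer as ULEB128 bytes."""
    out = bytearray()
    while True:
        byte = value & 0x7F          # low 7 bits
        value >>= 7
        if value:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)         # final byte: high bit clear
            return bytes(out)

# Offsets under 128 take one byte instead of udata4's fixed four.
print(encode_uleb128(0x7F).hex())   # 7f
print(encode_uleb128(0x80).hex())   # 8001
```

This is where the savings come from: most call-site offsets in practice fit in one or two bytes rather than udata4’s four.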
There is a complication with ULEB128: there is a ULEB128 field near the front of the EH table pointing to aligned typeinfo values near the end of the table. It can look like this:
.uleb128 (.ttbase - .start) # offset to aligned type infos
.long _ZTIi # type info for int
An assembler might start by assuming that the .uleb128 directive occupies only 1 byte, then calculate (.ttbase - .start) as 0x80, which requires 2 bytes. Increasing the .uleb128 directive to 2 bytes could reduce the (.ttbase - .start) difference to 0x7F, though, which can be represented with a 1-byte uleb128. (The LLVM assembler apparently alternates between these two encodings forever – https://bugs.llvm.org/show_bug.cgi?id=35809.)
The EHStreamer::emitExceptionTable code currently avoids this complication by calculating the size of everything in the EH table beforehand, and then aligning type infos using extra-large ULEB128 values (e.g. by encoding 0x20 as A0 80 00). Calculating the size of the EH table like this, though, requires using udata4 for code offsets, because we don’t know how large the code offsets are until we’ve finished assembling the object file.
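The “extra-large” trick works because ULEB128 allows redundant continuation bytes, so a value can be stretched to an exact byte count. A sketch (function name is mine):

```python
def encode_uleb128_padded(value, num_bytes):
    """Encode value in exactly num_bytes of ULEB128 by emitting
    redundant continuation bytes (e.g. 0x20 -> a0 80 00)."""
    out = bytearray()
    for i in range(num_bytes):
        byte = value & 0x7F
        value >>= 7
        if i < num_bytes - 1:
            byte |= 0x80  # force a continuation even if value is done
        out.append(byte)
    assert value == 0, "num_bytes too small for value"
    return bytes(out)

print(encode_uleb128_padded(0x20, 3).hex())  # a08000
```

Decoders accept both forms, so the padded encoding soaks up alignment slack without changing any other field in the table.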
Here’s the comment from EHStreamer::emitExceptionTable describing the problem:
The type infos need to be aligned. GCC does this by inserting padding just
before the type infos. However, this changes the size of the exception
table, so you need to take this into account when you output the exception
table size. However, the size is output using a variable length encoding.
So by increasing the size by inserting padding, you may increase the number
of bytes used for writing the size. If it increases, say by one byte, then
you now need to output one less byte of padding to get the type infos
aligned. However this decreases the size of the exception table. This
changes the value you have to output for the exception table size. Due to
the variable length encoding, the number of bytes used for writing the
length may decrease. If so, you then have to increase the amount of
padding. And so on. If you look carefully at the GCC code you will see that
it indeed does this in a loop, going on and on until the values stabilize.
We chose another solution: don’t output padding inside the table like GCC
does, instead output it before the table.
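The loop GCC’s comment describes can be modeled as a small fixed-point iteration (the layout model and names here are my own simplification, not GCC’s actual code):

```python
def uleb_size(value):
    """Number of bytes in the minimal ULEB128 encoding of value."""
    n = 1
    while value >= 0x80:
        value >>= 7
        n += 1
    return n

def stabilize(body_size, align=4):
    """Pick padding so the type infos land on an align boundary,
    re-encoding the ULEB128 length field until nothing changes."""
    size_bytes = 1                                # guess: 1-byte length field
    while True:
        pad = -(size_bytes + body_size) % align   # padding before type infos
        new_size_bytes = uleb_size(body_size + pad)
        if new_size_bytes == size_bytes:
            return pad, body_size + pad           # (padding, encoded length)
        size_bytes = new_size_bytes

# Growing the length field to 2 bytes changes the required padding.
print(stabilize(128))  # (2, 130)
```

In this toy model the iteration settles in one or two passes, matching the “until the values stabilize” behavior the comment describes.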
I have patches to LLVM that switch the encoding over to ULEB128. Generally, they make LLVM behave like GCC. Specifically they:
- Output .uleb128 label differences for (a) the type table base offset, (b) the size of the call site table, and (c) the code offsets in the call site table.
- Align the type table by inserting padding before it.
- Guarantee LLVM assembler termination by never shrinking an LEB fragment (and using extra-large LEB encodings where needed). This patch is trivial; it’s posted on the LLVM Bugzilla. It doesn’t make the LLVM assembler behave the same way as the GNU assembler, though; I don’t understand what GNU’s assembler is doing.
With these patches, the complication is handled in the assembler via the general relaxation system; in LLVM’s case, the sizes of the LEB128 fragments should stabilize very quickly (in one or two iterations). I’m not sure what other assemblers do, but I’m inclined to think the change is still OK – we’re simply matching what GCC does.
Is there another problem I’m not seeing? Has anyone else noticed this size difference?
- Termination landing pads.
Clang sometimes uses a landing pad that calls __clang_call_terminate to terminate the program. GCC instead leaves a gap in the call site table, and the personality routine calls std::terminate. For the 4MB libQt5Core.so sample I’m looking at, I think adopting GCC’s approach would reduce the size of .text and .ARM.extab by maybe 7000 bytes (about 0.18%). (I see about 500 calls to __clang_call_terminate, and I estimate 14 bytes per call, assuming the call site table is using ULEB128 already.)
I tried to implement this in LLVM, but couldn’t find a good way to represent the calls that must be omitted from the call site table.
Is there a reason LLVM doesn’t handle this like GCC?