LLVM EH tables much larger than GCC's

Hi,

I’m investigating the size of Clang-generated binaries relative to GCC’s when targeting Android, and I’ve noticed that Clang’s exception tables are much larger – in two examples, the .ARM.extab section is about 2.5 times as large.

I noticed a couple of differences between Clang and GCC:

  1. ULEB128 encoding.

In the call site table, GCC encodes offsets using a ULEB128 variable-length encoding, whereas LLVM uses fixed-size 32-bit values (udata4).
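
For reference, ULEB128 stores 7 bits per byte, with the high bit of each byte marking that more bytes follow, so offsets below 0x80 take a single byte instead of udata4’s fixed four. Here’s a minimal sketch of the encoding (not LLVM’s own encoder, just the format):

#include <cstdint>
#include <cstdio>
#include <vector>

// Minimal ULEB128 encoder: 7 bits per byte, high bit set on all but the last byte.
static std::vector<uint8_t> encodeULEB128(uint64_t value) {
  std::vector<uint8_t> bytes;
  do {
    uint8_t b = value & 0x7F;
    value >>= 7;
    if (value != 0)
      b |= 0x80;              // more bytes follow
    bytes.push_back(b);
  } while (value != 0);
  return bytes;
}

int main() {
  // A typical small call-site offset: 1 byte as ULEB128 vs. 4 bytes as udata4.
  printf("0x54 needs %zu ULEB128 byte(s)\n", encodeULEB128(0x54).size());
  // Values in 0x80..0x3FFF need 2 bytes, and so on.
  printf("0x80 needs %zu ULEB128 byte(s)\n", encodeULEB128(0x80).size());
}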

Switching to ULEB128 is a large size improvement. For instance, I’m currently playing with Qt5, and the libQt5Core.so binary is about 4MB with a 215KB .ARM.extab section. Switching the encoding cuts the .ARM.extab section down to 100KB for an overall file size reduction of about 2.9%.

There is a complication with ULEB128: there is a ULEB128 field near the front of the EH table pointing to aligned typeinfo values near the end of the table. It can look like this:

    .uleb128 (.ttbase - .start)   # offset to aligned type infos
.start:
    ...                           # (rest of the EH table)
    .balign 4
    .long _ZTIi                   # type info for int
.ttbase:

An assembler might start by assuming that the .uleb128 directive occupies only 1 byte, then calculate (.ttbase - .start) as 0x80, which requires 2 bytes. Growing the .uleb128 directive to 2 bytes, however, shifts .start forward by one byte, so the intervening .balign 4 emits one less byte of padding and (.ttbase - .start) shrinks to 0x7F, which fits in a 1-byte uleb128 again. (The LLVM assembler apparently alternates between these two encodings forever – https://bugs.llvm.org/show_bug.cgi?id=35809.)
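
To make the oscillation concrete, here’s a toy model of the relaxation loop, with made-up addresses and sizes chosen so that the layout never settles (a sketch of the failure mode, not the assembler’s actual code):

#include <cstdint>
#include <cstdio>

// Bytes needed for a minimal ULEB128 encoding of `value`.
static int ulebSize(uint64_t value) {
  int n = 0;
  do { value >>= 7; ++n; } while (value != 0);
  return n;
}

int main() {
  // Hypothetical layout matching the snippet above:
  //   [.uleb128][.start][body][.balign 4 padding][type info][.ttbase]
  const uint64_t kUlebAddr     = 3;    // address of the .uleb128 directive (arbitrary)
  const uint64_t kBodySize     = 0x7A; // bytes between .start and the .balign
  const uint64_t kTypeInfoSize = 4;    // one 4-byte type info

  int lebBytes = 1;                    // the assembler's initial guess
  for (int iter = 0; iter < 6; ++iter) {
    uint64_t start   = kUlebAddr + lebBytes;
    uint64_t aligned = (start + kBodySize + 3) & ~uint64_t(3);  // .balign 4
    uint64_t ttbase  = aligned + kTypeInfoSize;
    uint64_t offset  = ttbase - start;            // value the .uleb128 must encode
    int needed = ulebSize(offset);
    printf("iter %d: uleb = %d byte(s), offset = 0x%llx, needs %d byte(s)\n",
           iter, lebBytes, (unsigned long long)offset, needed);
    if (needed == lebBytes) break;                // layout converged
    lebBytes = needed;                            // re-layout; here it never converges
  }
}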

The EHStreamer::emitExceptionTable code currently avoids this complication by calculating the size of everything in the EH table beforehand, and then aligning the type infos using extra-large ULEB128 values (e.g. by encoding 0x20 as A0 80 00). Calculating the size of the EH table like this, though, requires using udata4 for code offsets, because we don’t know how large the code offsets are until we’ve finished assembling the object file.
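
Here’s a sketch of that padding trick, i.e. stretching a ULEB128 to a fixed width with redundant continuation bytes (not the code EHStreamer actually uses):

#include <cstdint>
#include <cstdio>
#include <vector>

// Encode `value` as ULEB128, padded with redundant continuation bytes so the
// result is exactly `padTo` bytes long (assumes the value fits in padTo bytes).
static std::vector<uint8_t> encodeULEB128Padded(uint64_t value, unsigned padTo) {
  std::vector<uint8_t> bytes;
  do {
    uint8_t b = value & 0x7F;
    value >>= 7;
    if (value != 0 || bytes.size() + 1 < padTo)
      b |= 0x80;                       // more bytes follow (real or padding)
    bytes.push_back(b);
  } while (value != 0 || bytes.size() < padTo);
  return bytes;
}

int main() {
  // 0x20 padded to 3 bytes comes out as A0 80 00, as mentioned above.
  for (uint8_t b : encodeULEB128Padded(0x20, 3))
    printf("%02X ", b);
  printf("\n");
}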

Here’s the comment from EHStreamer::emitExceptionTable describing the problem:

The type infos need to be aligned. GCC does this by inserting padding just
before the type infos. However, this changes the size of the exception
table, so you need to take this into account when you output the exception
table size. However, the size is output using a variable length encoding.
So by increasing the size by inserting padding, you may increase the number
of bytes used for writing the size. If it increases, say by one byte, then
you now need to output one less byte of padding to get the type infos
aligned. However this decreases the size of the exception table. This
changes the value you have to output for the exception table size. Due to
the variable length encoding, the number of bytes used for writing the
length may decrease. If so, you then have to increase the amount of
padding. And so on. If you look carefully at the GCC code you will see that
it indeed does this in a loop, going on and on until the values stabilize.
We chose another solution: don’t output padding inside the table like GCC
does, instead output it before the table.

I have patches to LLVM that switch the encoding over to ULEB128. Generally, they make LLVM behave like GCC. Specifically they:

  • Output .uleb128 label differences for (a) the type table base offset, (b) the size of the call site table, and (c) code offsets in the call site table.

  • Align the type infos by inserting padding just before them.

  • Guarantee LLVM assembler termination by never shrinking an LEB fragment (and using extra-large LEB encodings). This patch is trivial; it’s posted on the LLVM bugzilla. It doesn’t make the LLVM assembler behave the same way as the GNU assembler, though – I don’t understand what GNU’s assembler is doing.

With these patches, the complication is handled in the assembler via its general relaxation system; in LLVM’s case, the sizes of the LEB128 fragments should stabilize very quickly (in one or two iterations). I’m not sure what other assemblers do, but I’m inclined to think the change is still OK – we’re simply matching what GCC does.
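
As a sanity check of the never-shrink rule, here’s the toy layout from the oscillation example again; once the LEB fragment is only allowed to grow, relaxation settles immediately, with the final value emitted as an extra-large 2-byte ULEB128 (again a sketch, not the assembler’s code):

#include <cstdint>
#include <cstdio>

static int ulebSize(uint64_t value) {
  int n = 0;
  do { value >>= 7; ++n; } while (value != 0);
  return n;
}

int main() {
  const uint64_t kUlebAddr = 3, kBodySize = 0x7A, kTypeInfoSize = 4;

  int lebBytes = 1;
  for (int iter = 0; iter < 6; ++iter) {
    uint64_t start   = kUlebAddr + lebBytes;
    uint64_t aligned = (start + kBodySize + 3) & ~uint64_t(3);  // .balign 4
    uint64_t offset  = (aligned + kTypeInfoSize) - start;
    int needed = ulebSize(offset);
    printf("iter %d: uleb = %d byte(s), offset = 0x%llx, needs %d byte(s)\n",
           iter, lebBytes, (unsigned long long)offset, needed);
    if (needed <= lebBytes) break;  // never shrink: keep lebBytes and pad the encoding
    lebBytes = needed;              // only ever grows, so relaxation must terminate
  }
}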

Is there another problem I’m not seeing? Has anyone else noticed this size difference?

  2. Termination landing pads.

Clang sometimes uses a landing pad that calls __clang_call_terminate to terminate the program. GCC instead leaves a gap in the call site table, and the personality routine calls std::terminate. For the 4MB libQt5Core.so sample I’m looking at, I think it’d reduce the size of .text and .ARM.extab by maybe 7000 bytes (about 0.18%). (I see about 500 calls to __clang_call_terminate, and I estimate 14 bytes per call, assuming the call site table is using ULEB128 already.)

I tried to implement this in LLVM, but couldn’t find a good way to represent the calls that must be omitted from the call site table.

Is there a reason LLVM doesn’t handle this like GCC?

Examples:

- C++03: Compiler Explorer
- C++11: Compiler Explorer

See also my comment and Reid's reply here:
http://lists.llvm.org/pipermail/llvm-dev/2017-February/109995.html

Thanks,
-Ryan

Reid Kleckner wrote:

... I would say that we should just pattern match away our calls to std::terminate in the backend and emit the more compact tables, but that is actually a behavior change. It will cause cleanups between the thrown exception and the noexcept function to stop running. ...

It seems that some unwinders still run cleanups when the termination landing pad is omitted?

For example, with this sample code GCC emits an empty call site table for func3:

#include <stdio.h>
struct A { ~A() { fprintf(stderr, "~A\n"); } };
void func1() { throw 0; }
void func2() { A a; func1(); }      // 'a' has a cleanup between the throw and the noexcept boundary
void func3() noexcept { func2(); }  // GCC gives func3 an empty call site table
int main() { func3(); }

Compiling for either x86 or x86_64 Ubuntu, using g++ with libstdc++/libsupc++, the program still calls ~A. On the other hand, ~A isn't called if I build with g++ and libc++/libc++abi.

I think I'm not motivated enough to work on changing this LLVM behavior. The ULEB128 encoding change is more interesting to me. Right now, I think the biggest problem with that change is the existence of assemblers that can't cope with the GCC-style EH table assembly. This includes the current /usr/bin/as on macOS, which suffers from the LLVM runs-forever bug.

-Ryan

Compiling with g++6 and linking with libc++/libcxxrt also runs the destructor, so this looks like a libc++abi bug.

David