I’d like to give a status update to the community about the recently-added hot/cold splitting pass. I’ll provide some motivation for the pass, describe its implementation, summarize recent/ongoing work, and share early results.
We (at Apple) have found that memory pressure from resident pages of code is significant on embedded devices. In particular, this pressure spikes during app launches. We’ve been looking into ways to reduce memory pressure. Hot/cold splitting is one part of a solution.
The hot/cold splitting pass identifies cold basic blocks and moves them into separate functions. The linker must order newly-created cold functions away from the rest of the program (say, into a cold section). The idea here is to have these cold pages faulted in relatively infrequently (if at all), and to improve the memory locality of code outside of the cold area.
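To make the transformation concrete, here is a hand-written sketch (in C, not IR) of what splitting conceptually does. The function names and the error-path contents are hypothetical; the pass itself operates on LLVM IR, not source.

```c
#include <stdio.h>

/* Before splitting: the rarely-taken error path lives inline in the
   hot function, so its code shares pages with the hot loop. */
int parse_len_unsplit(const char *s) {
    if (s == NULL) {
        /* cold path: diagnostics that almost never run */
        fprintf(stderr, "parse_len: null input\n");
        return -1;
    }
    int n = 0;
    while (s[n]) n++;
    return n;
}

/* After splitting: the cold path is outlined into its own function,
   which the linker can place in a cold section away from hot code. */
static int parse_len_cold_path(void) {
    fprintf(stderr, "parse_len: null input\n");
    return -1;
}

int parse_len_split(const char *s) {
    if (s == NULL)
        return parse_len_cold_path();  /* tail call into outlined cold code */
    int n = 0;
    while (s[n]) n++;
    return n;
}
```

Both versions behave identically; the only difference is where the cold instructions end up in memory.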
The pass considers profile data, traps, uses of the cold attribute, and exception-handling code to identify cold blocks. If the pass identifies a cold region that’s profitable to extract, it uses LLVM’s CodeExtractor utility to split the region out of its original function. Newly-created cold functions are marked minsize (-Oz). The splitting process may occur multiple times per function.
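For illustration, here is a small C function exhibiting two of those source-level signals: a callee marked with the cold attribute, and a block ending in a trap. The function names are hypothetical; this only shows the kind of code the pass treats as cold.

```c
/* A callee marked cold: calls to it suggest the calling block is cold.
   noinline keeps the example from being folded away at -O2. */
__attribute__((cold, noinline))
static void report_fatal(void) {
    __builtin_trap();  /* a trap also marks the block as cold */
}

int checked_div(int a, int b) {
    if (__builtin_expect(b == 0, 0))  /* hinted unlikely, like profile data */
        report_fatal();
    return a / b;
}
```

With splitting enabled, the branch that calls report_fatal is a natural candidate for extraction, leaving only the hot division on the main path.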
The choice to perform splitting at the IR level gave us a lot of flexibility. It allowed us to quickly target different architectures and evaluate new phase orderings. It also made it easier to split out highly complex subgraphs of CFGs (with both live-ins and live-outs). One disadvantage is that we cannot easily split out EH pads (llvm.org/PR39545). However, our experiments show that being able to split them would only increase the total amount of split code by 2% across the entire iOS shared cache.
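The live-in/live-out handling can be sketched in C as well. This is a hypothetical, simplified picture of what CodeExtractor produces: values live into the region become parameters, and values live out of it are returned through output pointers.

```c
/* Hypothetical outlined region: live-ins become parameters, the
   live-out becomes an output pointer. (CodeExtractor generates the
   equivalent at the IR level.) */
static void cold_region(int live_in_a, int live_in_b, int *live_out) {
    *live_out = live_in_a * live_in_b + 1;
}

int compute(int x, int take_cold_path) {
    int result = x;
    if (take_cold_path) {
        int out;
        cold_region(x, 2, &out);  /* live-ins: x and 2; live-out: out */
        result = out;
    }
    return result;
}
```

Supporting both live-ins and live-outs is what lets the pass extract complex subgraphs rather than only self-contained leaf regions.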
Aditya and Sebastian contributed the hot/cold splitting pass in September 2018 (r341669). Since then, work on the pass has continued steadily. It gained the ability to extract larger cold regions (r345209), compile-time improvements (r351892, r351894), and a more effective cost model (r352228). With some experimentation, we found that scheduling splitting before inlining gives better code size results without regressing memory locality (r352080). Along the way, CodeExtractor got better at handling debug info (r344545, r346255), and a few other issues in this utility were fixed (r348205, r350420).
At this point, we’re able to build and run our software stack with hot/cold splitting enabled. We’d like to introduce a CC1 option to safely toggle splitting on/off (https://reviews.llvm.org/D57265). That would make it easier to experiment with and deploy the pass.
On internal memory benchmarks, we consistently saw that code page faults were more concentrated with splitting enabled. With splitting, the set of the most-frequently-accessed 95% (resp. 99%) of code pages was 10% (resp. 3.6%) smaller. To collect this data, we used a facility in the xnu VM to periodically force pages to be faulted, along with ktrace. We settled on this approach because the alternatives (e.g. directly sampling the RSS of various processes) gave unstable results, even when measures were taken to stabilize a device (e.g. disabling dynamic frequency switching, SMP, and various other features).
On arm64, the performance impact of enabling splitting in the LLVM test suite appears to be in the noise. We think this is because split code amounts to just 0.1% of all the code in the test suite. Across the iOS shared cache we see that 0.9% of code is split, with higher percentages in key frameworks (e.g. 7% in libdispatch). On three internal benchmarks, we see geomean score improvements of 1.58%, 0.56%, and 0.27% respectively. We think these results are promising, and I’d like to encourage others to evaluate the pass and share results.