Improve hot cold splitting to aggressively outline small blocks

Hello,
I am Ruijie Fang, a GSoC student working on "Improve hot cold
splitting to aggressively outline small blocks." Over the course of
the last week, I met with my mentor and co-mentor, Aditya Kumar and
Rodrigo Rocha, and we made a preliminary plan for improving the
existing hot/cold splitting pass in LLVM by identifying patterns
of cold blocks in real-world workloads via block frequency information
(we have settled on using the PostgreSQL codebase as the first workload,
although if time permits, we will also target other large codebases).

Our project will involve identifying new cold block patterns via
static analysis in our workload, implementing detection of these
patterns into the existing hot/cold splitting pass, and then
benchmarking hot/cold splitting in our workload to see if there are
improvements. Our eventual goal is to improve the ability of hot/cold
analysis to detect cold blocks in these real-world workloads.

Our plan is available at
https://docs.google.com/document/d/1rGLcFpfVXnF7aS31dWnowd2y_BjJnRA-hj3cUt6MqZ8/edit?usp=sharing.

Any feedback, input, or suggestion is welcome and highly appreciated!

Best regards,
Ruijie

Ruijie Fang
Email: ruijief@princeton.edu


Hi Ruijie,

Thanks for the info!

I skimmed the doc (I suggest including it inline in the thread). It wasn't clear to me whether the main goal is to improve PGO-based HCS or non-PGO-based HCS. It sounds like you are going to focus on non-PGO-based HCS, given the comments about static analysis and detection of throws, asserts, etc.

A couple of suggestions. First, I'd focus on ensuring the best performance possible given PGO information (the last time I tried HCS with PGO, it wasn't improving performance for one of our large apps). Second, for the non-PGO case, rather than building detection of likely cold blocks into HCS itself, it would be better to drive static generation of some kind of profile metadata for likely cold blocks (a la __builtin_expect). This will be more general and allow passes other than HCS to benefit.

Teresa

Hi Teresa,

Thank you for your reply! I discussed this with Aditya and Rodrigo today. We will always have PGO turned on for our benchmark (i.e., we assume the profiling information is always available). In terms of the workload we supply to PGO: for PostgreSQL, I suggested we use the "pgbench" benchmark, a TPC-B-based SQL benchmark for Postgres, to supply profiling information. We can use other workloads/benchmarks should you have any suggestions about this.
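For concreteness, the loop we have in mind would look roughly like the sketch below (paths, flag values, and benchmark parameters are illustrative, not final):

```shell
# 1. Build postgres with PGO instrumentation.
CC=clang CFLAGS="-fprofile-generate" ./configure && make
# 2. Run the pgbench (TPC-B-like) workload to collect profiles.
pgbench -i mydb          # initialize benchmark tables
pgbench -c 8 -T 60 mydb  # e.g. 8 clients for 60 seconds
# 3. Merge the raw profiles and rebuild with PGO + hot/cold splitting.
llvm-profdata merge -output=pgo.profdata default_*.profraw
CC=clang CFLAGS="-fprofile-use=pgo.profdata -mllvm -hot-cold-split=true" \
    ./configure && make
```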

Thank you,
Ruijie

Hello Ruijie,

One other workload that would be interesting to test might be clang itself. Building clang with PGO information is a common trick for improving compiler performance and it’s well supported in the build system.
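For reference, a rough sketch of that build-system support (directory layout and profiling workload are illustrative): LLVM's CMake can produce an instrumented stage-1 clang, gather a profile, and feed it back.

```shell
# Stage 1: build an IR-instrumented clang.
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release \
      -DLLVM_ENABLE_PROJECTS=clang \
      -DLLVM_BUILD_INSTRUMENTED=IR ../llvm
ninja clang
# Run a training workload with the instrumented compiler to emit
# raw profiles, then merge them into clang.profdata.
ninja generate-profdata
# Stage 2: rebuild clang using the merged profile.
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release \
      -DLLVM_ENABLE_PROJECTS=clang \
      -DLLVM_PROFDATA_FILE=$PWD/profiles/clang.profdata ../llvm
ninja clang
```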

Thanks for working on this.

Tobias.

Hello Tobias,

Thank you for the suggestion! Aditya also mentioned this. I will look into it.

Best regards,
Ruijie

Hi Ruijie, Aditya

This is really interesting work! Since you mention that the baseline is a PGO optimized build, can you elaborate on the motivation? Is it because PGO instrumentation leads to additional early stage code changes which may increase the opportunity? Perhaps Context Sensitive PGO can be useful here (see https://reviews.llvm.org/D54175). It introduces an additional round of profiling which should provide more precise information on cold blocks.

Hi Snehasish,

I will attempt a reply here, and Aditya can add more, as I'm a complete newcomer to PGO.

Yes, I think the objective is to take advantage of the PGO information in one way or another to optimize for performance; for instance, previous papers [1][2] on HCS have all taken the profiling-based approach to optimize for icache misses. The bottom line is that PGO certainly provides helpful information for identifying hot functions to optimize, and we would like to account for that information (at least, not cause significant performance regressions).

Thanks!
Ruijie

Hi Ruijie,

I have a question. How does the HotColdSplitting pass differ from the PartialInlining pass, which can also make use of PGO information? A few years ago I added support in PartialInlining to outline cold blocks/regions, ensuring only hot/warm regions get inlined into the caller.

Is the difference simply that HotColdSplitting will not inline hot code into the caller? If so, I think there's some synergy between the two passes, and we should consider refactoring so we don't have duplication.

Cheers,

Graham Yiu

Compiler Software Engineer

Toronto Heterogeneous Compiler Lab

Huawei Technologies Canada