Capabilities of Clang's PGO (e.g. improving code density)

Hello –

I’m an engineer in Microsoft Office looking into the possible advantages of using PGO for our Android applications.

We at Microsoft have deep experience with Visual C++’s Profile Guided Optimization and often see a 10% or greater reduction in the size of application code loaded after using PGO for key scenarios (e.g. application launch). Making applications launch quickly is very important to us, and reducing the number of code pages loaded helps with this goal.

Before we dig into turning it on, I’m wondering if there’s any pre-existing research / case studies about the code page reduction seen in other Clang PGO-enabled applications? It sounds like there are some possible performance problems during instrumented runs, where counter contention results in sluggish performance and perhaps skewed profile data: https://groups.google.com/forum/#!topic/llvm-dev/cDqYgnxNEhY. I’d like an overview of the optimizations that PGO does, but I don’t find much from looking at the Clang PGO section: http://clang.llvm.org/docs/UsersManual.html#profile-guided-optimization.
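(For reference, the instrumentation-based workflow that manual describes is roughly the following; file and scenario names here are illustrative, not from our build:)

```shell
# 1) Build with instrumentation enabled
clang -O2 -fprofile-instr-generate app.c -o app.inst

# 2) Exercise the key scenarios; each run emits a raw profile
#    (%p expands to the process id)
LLVM_PROFILE_FILE="app-%p.profraw" ./app.inst

# 3) Merge the raw profiles into a single profile data file
llvm-profdata merge -output=app.profdata app-*.profraw

# 4) Rebuild, feeding the profile back to the optimizers
clang -O2 -fprofile-instr-use=app.profdata app.c -o app
```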

For example, from reading different pages on how Clang PGO works, it’s unclear if it does “block reordering” (i.e. moving unexecuted code blocks to a distant code page, leaving only ‘hot’ executed code packed together for greater code density). I find mention of “hot arc” optimization (-fprofile-arcs), but I’m unclear if this is the same thing. Does Clang PGO do block reordering?

Thanks,

–Lee

A small clarification. Clang itself does not implement any
optimizations. Clang limits itself to generating LLVM IR. The
annotated IR is then used by some LLVM optimizers to guide decisions.
At this time, there are only a few optimization passes that use the
profile information: block reordering and register allocation (to
avoid spilling on cold paths).

There are no other significant transformations that use profiling
information. We are working on that. Notably, we'd like to add
profiling-based decisions to the inliner, loop optimizers and the
vectorizer.

Diego.

Hello –

I’m an engineer in Microsoft Office looking into the possible
advantages of using PGO for our Android applications.

We at Microsoft have deep experience with Visual C++’s Profile Guided
Optimization and often see a 10% or greater reduction in the size of
application code loaded after using PGO for key scenarios (e.g.
application launch).

Yes. This is true for GCC too. Clang's PGO does not shrink code size
yet.

Making applications launch quickly is very important to us, and
reducing the number of code pages loaded helps with this goal.

Before we dig into turning it on, I’m wondering if there’s any
pre-existing research / case studies about the code page reduction
seen in other Clang PGO-enabled applications? It sounds like there are
some possible performance problems during instrumented runs, where
counter contention results in sluggish performance and perhaps skewed
profile data: https://groups.google.com/forum/#!topic/llvm-dev/cDqYgnxNEhY.

Counter contention is one issue. Redundant counter updates are another
major issue (due to the early instrumentation). We are working on the
latter and seeing great speedups.

I’d like an overview of the optimizations that PGO does, but I don’t
find much from looking at the Clang PGO section:
http://clang.llvm.org/docs/UsersManual.html#profile-guided-optimization.

Profile data is not used in any IPA passes yet. It is used by many
post-inline optimizations though -- including block layout, the
register allocator, etc.

For example, from reading different pages on how Clang PGO works, it’s
unclear if it does “block reordering” (i.e. moving unexecuted code
blocks to a distant code page, leaving only ‘hot’ executed code packed
together for greater code density).

LLVM's block placement uses branch probability and frequency data, but
there is no function splitting optimization yet.

I find mention of “hot arc” optimization (-fprofile-arcs), but I’m
unclear if this is the same thing. Does Clang PGO do block reordering?

It does reordering, but does not do splitting/partitioning.

David

Thanks! CIL [LeeHu] for a few comments…

*From:* Xinliang David Li [mailto:xinliangli@gmail.com]
*Sent:* Wednesday, May 27, 2015 9:29 AM
*To:* Lee Hunt
*Cc:* llvmdev@cs.uiuc.edu
*Subject:* Re: [LLVMdev] Capabilities of Clang's PGO (e.g. improving code
density)

Hello –

I’m an engineer in Microsoft Office looking into the possible
advantages of using PGO for our Android applications.

We at Microsoft have deep experience with Visual C++’s Profile Guided
Optimization and often see a 10% or greater reduction in the size of
application code loaded after using PGO for key scenarios (e.g.
application launch).

Yes. This is true for GCC too. Clang's PGO does not shrink code size
yet.

[LeeHu] Note: I’m not talking about shrinking code size, but rather
reordering it such that only ‘active’ branches within the profiled
functions are grouped together in ‘hot’ code pages. This is a very big
optimization for us in the VC++ PGO toolchain.

We also have the “/LTCG” flag -- which is seemingly similar to the
“-flto” Clang flag -- that **does** shrink code by various means (dead
code removal, common IL tree collapsing) because it can see all the
object code for an entire produced target binary (e.g. .exe or .dll).

Does -flto also shrink code?

That depends on the other options used (e.g., -Os). With LTO, the
compiler sees a larger scope, and performs cross-module inlining and
dead function elimination. It does have more opportunities to shrink
code.
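As a minimal sketch of that combination (standard Clang flags; file names illustrative), LTO plus size optimization looks like:

```shell
# Each translation unit is compiled to LLVM bitcode, not machine code
clang -Os -flto -c a.c -o a.o
clang -Os -flto -c b.c -o b.o

# The link step runs LTO: cross-module inlining and dead function
# elimination happen here, with visibility over the whole program
clang -Os -flto a.o b.o -o app
```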

Making applications launch quickly is very important to us, and
reducing the number of code pages loaded helps with this goal.

Before we dig into turning it on, I’m wondering if there’s any
pre-existing research / case studies about the code page reduction
seen in other Clang PGO-enabled applications? It sounds like there are
some possible performance problems during instrumented runs, where
counter contention results in sluggish performance and perhaps skewed
profile data: https://groups.google.com/forum/#!topic/llvm-dev/cDqYgnxNEhY.

Counter contention is one issue. Redundant counter updates are another
major issue (due to the early instrumentation). We are working on the
latter and seeing great speedups.

I’d like an overview of the optimizations that PGO does, but I don’t
find much from looking at the Clang PGO section:
http://clang.llvm.org/docs/UsersManual.html#profile-guided-optimization.

Profile data is not used in any IPA passes yet. It is used by many
post-inline optimizations though -- including block layout, the
register allocator, etc.

[LeeHu]: sorry for the naïve question, but what is IPA?

Inter-procedural analysis/optimizations.

For example, from reading different pages on how Clang PGO works, it’s
unclear if it does “block reordering” (i.e. moving unexecuted code blocks
to a distant code page, leaving only ‘hot’ executed code packed together
for greater code density). I find mention of “hot arc” optimization
(-fprofile-arcs), but I’m unclear if this is the same thing. Does Clang
PGO do block reordering?

A small clarification. Clang itself does not implement any
optimizations. Clang limits itself to generating LLVM IR. The
annotated IR is then used by some LLVM optimizers to guide decisions.
At this time, there are only a few optimization passes that use the
profile information: block reordering and register allocation (to
avoid spilling on cold paths).

There are no other significant transformations that use profiling
information. We are working on that. Notably, we'd like to add
profiling-based decisions to the inliner

Just a quick note about the inliner. Although the inliner itself
doesn't know how to use the profile, clang's IRGen has been modified
to add an 'inlinehint' attribute to hot functions and the 'cold'
attribute to cold functions. Indirectly, PGO does affect the
inliner. (We'll remove this once the inliner does the right thing on
its own.)

Hi David!

Thanks again for your help! I was wondering if you could clarify one thing for me?

I find mention of “hot arc” optimization (-fprofile-arcs) , but I’m unclear if this is the same thing. Does Clang PGO do block reordering?

It does reordering, but does not do splitting/partitioning.

I take this to mean that PGO does block reordering within the function? I don’t see that the clang driver passes anything to the linker to drive function ordering at the linker level as well. Is there something there that I missed, or are you aware of any readily available tools to do so? If not, we’ve done some work locally on enabling that, which we will continue.

Thanks J

–randy

Ok. There are three reordering-related optimizations:

1) intra-procedural basic block reordering, to reduce branch cost,
icache misses and front-end stalls;
2) function splitting/partitioning -- splitting the rarely executed
parts of a function out into an unlikely .text section;
3) function reordering based on affinity and hotness -- reordering
functions by the linker/plugin (guided by compiler annotations).

Clang currently only does 1).
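As a rough sketch of what 3) would involve at the toolchain level, one can emulate function reordering by hand using per-function sections plus a linker-supplied symbol order. This assumes a sufficiently recent LLD; symbol_order.txt below is a hypothetical, hand-written hot-function list, not something the compiler emits:

```shell
# Place every function in its own section so the linker can move it
clang -O2 -ffunction-sections -c app.c -o app.o

# Link with LLD, listing hot functions first; symbol_order.txt is a
# hand-maintained list here (e.g. derived from your own profiling)
clang -fuse-ld=lld -Wl,--symbol-ordering-file=symbol_order.txt \
      app.o -o app
```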

Hope this clarifies.

thanks,

David

David,

Yes, that is very helpful. Thanks!

–randy

Yes, thanks David!

For the intra-procedural Basic Block Reordering, do you have any data as to how much improvement that gives speed-wise for any perf tests you’ve measured?

I’m thinking this may speed things up for things like application launch by a couple %. For perf-intensive code (e.g. spreadsheet recalc), I would expect it to be more.

Yes. Most of the benchmarks we have see improvements with better
layout -- some improvements are small and some are large. Of course
this also depends on the layout algorithm, which we are working on
improving too.

I’m thinking this may speed things up for things like application launch
by a couple %.

Function reordering may be more important for this, and it needs a
call-trace profile. The trace-based layout will reduce the number of
page faults during program start.

David

Just a quick note about the inliner. Although the inliner itself
doesn't know how to use the profile, clang's IRGen has been modified
to add an 'inlinehint' attribute to hot functions and the 'cold'
attribute to cold functions. Indirectly, PGO does affect the
inliner. (We'll remove this once the inliner does the right thing on
its own.)

OT: Can you give me a pointer to the clang code involved? I wasn't aware of this.

Have a look at `CodeGenPGO::applyFunctionAttributes()` around line
760 of lib/CodeGen/CodeGenPGO.cpp.

It is set in clang/lib/CodeGen/CodeGenPGO.cpp
CodeGenPGO::applyFunctionAttributes.

Note that it uses the function entry count to determine hotness. This
means that functions entered infrequently but containing very hot
loops would be marked cold; perhaps this works since it is only used
for inlining and is presumably a stand-in for call-edge hotness. The
MaxFunctionCount for the profile is also the max of all the function
entry counts (set during profile writing).

Teresa

Right. We now also have function entry counts propagated into the IR. This gives the inliner a way to compute global hotness using entry counts and internal frequencies.

Diego.