Computing block profile weights

Hello,

I’m working on an application that would benefit from knowing the weight of a basic block, as in “fraction of the program’s execution time spent in this block”.

Currently, I’m computing this using the block’s frequency from BlockFrequencyInfo, relative to the function’s entry block frequency, and scaled by the function’s entry count. This is also the computation that’s done by getBlockProfileCount in lib/Analysis/BlockFrequencyInfoImpl.cpp.

The problem is that this method can be extremely imprecise, because many functions have an entry count of zero. The entry count is computed from the number of profile samples in the entry block. Depending on the function’s CFG, this count can be arbitrarily low even though the function is frequently called or hot.
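
To make this concrete, here is a small, purely illustrative Python sketch of the entry-count-based estimate; the block names, frequencies, and sample counts are made up:

# Toy model of count(bb) = freq(bb) / freq(entry) * entry_count.
# Frequencies as BlockFrequencyInfo might report them (illustrative values).
freq = {"entry": 8, "loop_body": 8000, "exit": 8}

# Samples observed per block: the hot loop body got all the samples,
# while the entry block happened to receive none.
samples = {"entry": 0, "loop_body": 10000, "exit": 0}

entry_count = samples["entry"]  # derived from entry-block samples -> 0

for bb in freq:
    estimate = freq[bb] / freq["entry"] * entry_count
    print(bb, estimate)  # every estimate is 0, even for the hot loop body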

Here’s an idea to address this. I’d like to collect a bit of feedback from the community before trying it out.

  1. Instead of relying on a function’s entry count, store the total number of samples in a function. This number is readily available from the profile loader.
  2. Compute a block’s weight as function_samples * block_weight / sum_of_block_weights_in_function

Why do I like this?

  • The total number of samples in a function gives a better impression of the function’s importance than the entry count does.
  • This scheme “preserves mass” in that all samples of a function are taken into account. The samples in a BB are compared to samples in the entire function, rather than a few (arbitrarily) selected samples from the entry block.
  • The computation avoids imprecision from multiplying by small numbers.

Disadvantages?

  • BlockFrequencyInfo needs to keep track of the total frequency in a function.
  • BlockFrequencyInfo would probably scale the frequencies w.r.t. that total, rather than the maximum frequency. This loses a few bits of precision.

Note that the entry count would not be lost in this scheme; one could easily compute it as function_samples * entry_weight / sum_of_block_weights_in_function.

I believe using an entire function as the unit of reference is a good compromise between precision and modularity. Precision is high because there’s a sufficient number of samples available in a function. Modularity is preserved because the computation does not need to take other functions into account (in fact, BlockFrequencyInfo already processes one function at a time).

What do people think about this?

  • Jonas

An addendum to fix a mistake in terminology:

  1. Compute a block’s weight as function_samples * block_weight / sum_of_block_weights_in_function

This should be function_samples * block_frequency / sum_of_block_frequencies_in_function

Cheers,
Jonas

Hello,

I’ve implemented my idea; a patch is attached. My code computes block weights as follows:

w = f[bb] / sum(f[b] for b in func) * sum(s[b] for b in func)

Where f[b] is the frequency of basic block b (as computed by BlockFrequencyInfo), func is the function that contains bb, and s[b] is the number of profiling samples in block b.

Previously, the computation was done as follows:

w = f[bb] / f[entry] * s[entry]

Where entry is the entry block of the function containing bb.
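
To illustrate the difference, here is a small Python sketch (a toy model with made-up numbers, not the patch itself) that evaluates both formulas on the same function:

freq    = {"entry": 8, "loop_body": 8000, "exit": 8}   # f[b], illustrative
samples = {"entry": 0, "loop_body": 10000, "exit": 0}  # s[b], illustrative

total_freq    = sum(freq.values())
total_samples = sum(samples.values())

def w_new(bb):
    # proposed: f[bb] / sum(f[b] for b in func) * sum(s[b] for b in func)
    return freq[bb] / total_freq * total_samples

def w_old(bb):
    # previous: f[bb] / f[entry] * s[entry]
    return freq[bb] / freq["entry"] * samples["entry"]

for bb in freq:
    print(bb, w_new(bb), w_old(bb))

# w_old is zero for every block because the entry block has no samples;
# w_new assigns the loop body almost all of the function's 10000 samples
# and can never exceed that total. The entry count can still be recovered:
recovered_entry_count = freq["entry"] / total_freq * total_samples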

At first glance, block weights look more stable. There are fewer cases where weights are zero, because there is a sufficient number of samples in all functions of interest. There are also fewer cases where block weights are unrealistically high, because the weight of a block is now limited to the total number of samples in the function.

Questions to the community:

  • What do you think about this method of computing block weights?
  • There are some cases where I’m unsure how this method behaves, e.g., with inlining. Thoughts about this are welcome.
  • (since this is the first time I’m upstreaming a change to LLVM) What would it take to get this into LLVM?

Best,
Jonas

0001-Use-profile-samples-to-compute-block-weight.patch (36.6 KB)

Jonas, I assume you are talking about Sample-Based PGO. Yes, the problem you mentioned exists – and your proposed solution seems reasonable. +dehao for comments.

David

Hi,

> Jonas, I assume you are talking about Sample-Based PGO. Yes, the problem you mentioned exists – and your proposed solution seems reasonable. +dehao for comments.

Yes, the problem is present in sample-based PGO, and so far this is the only case I’ve tested.

The patch contains a little bit of code to also compute the total number of samples for instrumentation-based PGO, so it should not break that case. But I don’t expect improvements for instrumentation-based PGO, because it has accurate function entry counts.

Best,
Jonas

Your proposal seems reasonable to me. Another approach is, instead of using the total frequency, to use the max frequency to scale. This guards against cases where partial branch probability information is missing, which can make BFI mistakenly increase or decrease the frequency of some BBs.
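
Roughly, with made-up numbers (just a sketch; pairing the max frequency with that block’s own sample count is only one possible choice):

# Illustrative only: pin the scale to the hottest block instead of the
# function totals, i.e. w[bb] = f[bb] / max(f[b]) * s[hottest block].
freq    = {"entry": 8, "loop_body": 8000, "exit": 8}
samples = {"entry": 0, "loop_body": 10000, "exit": 0}

hottest = max(freq, key=freq.get)
for bb in freq:
    w = freq[bb] / freq[hottest] * samples[hottest]
    print(bb, w)

# A missing branch probability that inflates a cold block's frequency changes
# sum(f[b]) but not max(f[b]), so this scale is unaffected as long as the
# hottest block itself is estimated correctly.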

I’m curious which optimizations rely on the global hotness of a basic block. The inliner is one of them, and we already address the issue with extractProfTotalWeight. But if you want to use this after the inliner, the inliner actually needs to scale/update the metadata to keep it correct.

Dehao