RFC: A binary serialization format for MemProf

This RFC contains the following:

  • Proposal to introduce a new raw binary serialization format for heap allocation profiles

  • Proposal to extend the PGO indexed format to hold heap allocation profiles

We look forward to your feedback on the proposals.

Authors: snehasishk@google.com, davidxl@google.com, tejohnson@google.com

Introduction

FYI, you can also view the RFC here: https://groups.google.com/g/llvm-dev/c/h1DvHguLpxU , which displays the diagrams better (without extra-wide spacing).

David

Hi Snehasish, David and Theresa,

I'm really glad to see the steady progress in this area!

It looks like the format is pretty much language independent
(correct?) -- so it can be applied not only to C/C++, but also to other
languages (Rust) and even other toolchains (Go)? If you have already
considered using the data profile for non-C/C++, may I kindly ask you
to share your thoughts on this?

Yours,
Andrey

Hi Andrey,

The serialization format is language independent, though our focus is C/C++. Note that our instrumentation is based on the LLVM sanitizer infrastructure and should work for Rust, which supports building with sanitizers [1]. We have not considered using the data profile for non-C/C++ code.

Regards,
Snehasish

[1] https://doc.rust-lang.org/beta/unstable-book/compiler-flags/sanitizer.html

I was going to respond similarly, and add a note that it isn’t clear that gollvm (LLVM-based Go compiler) supports either PGO or the sanitizers, so that may be more difficult than Rust which does. As Snehasish notes, we are focused on C/C++, but this will all be done in the LLVM IR level and should be language independent in theory.
Teresa

I was going to respond similarly, and add a note that it isn't clear that gollvm (LLVM-based Go compiler) supports either PGO or the sanitizers, so that may be more difficult than Rust which does. As Snehasish notes, we are focused on C/C++, but this will all be done in the LLVM IR level and should be language independent in theory.

+Than McIntosh to comment more on PGO and sanitizer support for gollvm.

David

Hi all,

Gollvm does not support the sanitizers at the moment. There is some support for PGO (driver plumbing and such) but it is in a fairly rudimentary state, mainly there to allow running experiments. It would need a good deal more work to make it production-quality.

Thanks, Than

Thanks Teresa and others for the clarification!

I was going to respond similarly, and add a note that it isn’t clear that gollvm (LLVM-based Go compiler) supports either PGO or the sanitizers, so that may be more difficult than Rust which does. As Snehasish notes, we are focused on C/C++, but this will all be done in the LLVM IR level and should be language independent in theory.

Let me note that I specifically meant gc (Google’s standard Go compiler), not gollvm. IMHO, there is an intrinsic value of data formats being unified among different toolchains – as very well demonstrated by DWARF.

(Yes, I’m aware that gc doesn’t even support long-established instruction profiling. One of the reasons is the apparent lack of implemented optimizations that could directly benefit from profiling. In the case of memory profiling, the use case is clear. Also, given that BOLT helps Go a lot (up to +20% speed-up on our internal tests), I expect the same for memory profiling, which will warrant extending gc’s capabilities to use the MemProf format.)

Yours,
Andrey

+Than McIntosh again to comment on the gc question below.

Thanks Teresa and others for the clarification!

I was going to respond similarly, and add a note that it isn’t clear that gollvm (LLVM-based Go compiler) supports either PGO or the sanitizers, so that may be more difficult than Rust which does. As Snehasish notes, we are focused on C/C++, but this will all be done in the LLVM IR level and should be language independent in theory.

Let me note that I specifically meant gc (Google’s standard Go compiler), not gollvm. IMHO, there is an intrinsic value of data formats being unified among different toolchains – as very well demonstrated by DWARF.

(Yes, I’m aware that gc doesn’t even support long-established instruction profiling. One of the reasons is the apparent lack of implemented optimizations that could directly benefit from profiling. In the case of memory profiling, the use case is clear. Also, given that BOLT helps Go a lot (up to +20% speed-up on our internal tests), I expect the same for memory profiling, which will warrant extending gc’s capabilities to use the MemProf format.)

I don’t think the gc compiler even involves llvm as it is written in Go. So that’s definitely outside the scope of our work. I’m not personally very familiar with Go compiler toolchains and their roadmaps, but Than can probably comment.

Teresa

I don’t think the gc compiler even involves llvm as it is written in Go.

Correct.

I’m not personally very familiar with Go compiler toolchains and their roadmaps, but Than can probably comment.

I don’t see any reason why something similar to what Teresa and Snehasish are proposing couldn’t be implemented for the Go gc-based toolchain (with a significant amount of effort) -- from my reading it looks fairly language independent.

True, as previously pointed out, the gc-based Go toolchain currently doesn’t support ASAN and lacks any sort of PGO/FDO capability, but this is not written in stone. FDO support, along with improving the compiler back end to exploit profile data (via inlining, basic block layout, etc) is something that could be added if need be. Go’s priorities have simply been different from those of C/C++.

IMHO, there is an intrinsic value of data formats being unified among different toolchains – as very well demonstrated by DWARF

Comparison with DWARF seems a bit odd here. I agree that unified formats can be useful, but I would point out that there is a great deal of administrative overhead associated with standards like DWARF (committee meetings, heavyweight processes for reaching consensus on new features, release cycles measured in years, etc).

Go (for example) uses its own object file format, as opposed to using an existing standard format (e.g. ELF or PE/COFF). The ability to modify and evolve the object file format is a huge enabler when it comes to rolling out new features. It was a key element in the last two big Go projects I’ve worked on; had we been stuck with an existing object file format, the work would have been much more difficult.

Than

Hello Snehasish,

It’s great to see this RFC about the profile format of the heap profiler since the last RFC. I have a couple of questions about how calling contexts are stored and processed by the compiler:

  1. How are recursive allocation contexts stored? Wondering if there’s any recursive compression performed. For example, a tree-based construction algorithm may create tree nodes recursively.

This is handled by the memprof/sanitizer runtime. Teresa can point to the answer.

  2. Is each tree node object modeled by its unique dynamic context?

I suppose the depth of the allocation contexts for tree nodes can be different.

  3. Will the contexts of a leaf function be merged during compilation when the leaf function is not inlined? If so, where does the merging happen?

The merging/pruning/trimming can happen during the offline processing step to reduce profile size. The merging should be based on whether contexts provide differing information or not; merging contexts with different profile properties won’t help optimizations.

thanks,

David

Hello Snehasish,

It’s great to see this RFC about the profile format of the heap profiler since the last RFC. I have a couple of questions about how calling contexts are stored and processed by the compiler:

  1. How are recursive allocation contexts stored? Wondering if there’s any recursive compression performed. For example, a tree-based construction algorithm may create tree nodes recursively.

This is handled by the memprof/sanitizer runtime. Teresa can point to the answer.

Actually, they are not currently compressed by the runtime; memprof follows the same mechanism as the sanitizers, which simply truncates long stack contexts. This is something we can and should improve in the memprof runtime, however, by folding the recursive call chains.
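As a rough illustration (not what the memprof runtime currently does), folding directly recursive frames could be as simple as collapsing adjacent repeated frame addresses in the captured stack:

#include <cstdint>
#include <vector>

// Sketch only: collapse runs of identical frames so that a directly
// recursive chain A -> B -> B -> B -> C is stored as A -> B -> C.
// Mutual recursion (A -> B -> A -> B) would need cycle detection instead.
std::vector<uint64_t> foldRecursiveFrames(const std::vector<uint64_t> &Stack) {
  std::vector<uint64_t> Folded;
  for (uint64_t PC : Stack)
    if (Folded.empty() || Folded.back() != PC)
      Folded.push_back(PC);
  return Folded;
}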

  2. Is each tree node object modeled by its unique dynamic context?

I suppose the depth of the allocation contexts for tree nodes can be different.

  3. Will the contexts of a leaf function be merged during compilation when the leaf function is not inlined? If so, where does the merging happen?

Can you give a specific example of the case you are concerned about?
Teresa

Hi Hongtao,

How are recursive allocation contexts stored? Wondering if there’s any recursive compression performed. For example, a tree-based construction algorithm may create tree nodes recursively. Is each tree node object modeled by its unique dynamic context?

There is no special handling of recursive calling contexts; we store the entire unique dynamic calling context as the identifier.
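As a minimal sketch of what this means (types and field names below are invented, not the proposed format), the full call stack itself keys the per-context allocation info:

#include <cstdint>
#include <map>
#include <vector>

// Illustrative per-context counters, not the actual MemInfoBlock fields.
struct AllocInfo {
  uint64_t TotalSize = 0;
  uint64_t AllocCount = 0;
};

// The entire dynamic call stack (frame addresses here) is the identifier;
// two allocations share an entry only if their full contexts match exactly.
using CallStack = std::vector<uint64_t>;
std::map<CallStack, AllocInfo> ContextTable;

void recordAlloc(const CallStack &Context, uint64_t Size) {
  AllocInfo &Info = ContextTable[Context];
  Info.TotalSize += Size;
  Info.AllocCount += 1;
}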

Will the contexts of a leaf function merged during compilation when the leaf function is not inlined? If so, where does the merging happen?

During compilation, each allocation site may be annotated with one or more heap allocation info blocks, each identified by a unique dynamic calling context. We will not merge heap profile information across unique contexts, as one of our immediate goals is to distinguish between hot and cold allocation contexts. The mechanism to distinguish the allocation contexts involves cloning or parameterization, and Teresa will present the details in an upcoming RFC.

Hi Snehasish, Teresa and David,

Thanks for the information. I have another question about the optimized (pass2) build. Does the runtime heap allocator identify a heap object using calling contexts too? Would some sort of virtual unwinding plus processing of debug inline contexts be needed?

Thanks,
Hongtao

Hi Hongtao,

Consider the following example with two contexts -

foo // This function is hot
   bar
       malloc()
baz // This function is cold
   bar
       malloc()

The profile loader will annotate the call to malloc() in the IR with
two contexts and their characteristics. Since one context is hot and
the other is cold, their characteristics differ (as David noted) and
we will not merge the contexts during profile processing. Now there
are a few ideas on how the allocator can determine whether this is a
hot or cold allocation at runtime --

1. Static deduplication via cloning - we can clone bar and rewrite the
call to malloc with a special call which indicates that it is cold.
The second example above would then look like --
baz
   bar_cold
       malloc_cold()
While this involves code duplication, potentially increasing
icache/itlb footprint, for cold enough contexts we can tune the
threshold so that the benefit outweighs the cloning costs. (A
source-level sketch of this approach follows after the list of ideas
below.)

2. Parameterization - we can parameterize bar to carry additional
information that this current context is cold. Thus the code would
look like this --
baz
  bar_parameterized (/*is_cold_context=*/ true)
    if (is_cold_context) malloc_cold()
    else malloc()
This will lead to code bloat on hot paths. This can also lead to a
large amount of parameterization when there are interleaving cold
contexts, increasing register pressure along hot paths. An optimized
approach may be able to pack the information using some encoding.

3. Runtime calling context identification - As you suggested, the
allocator can identify the heap object using the calling context. An
implementation might look like this --
baz
  bar
     malloc()
        id = get_context()
        if (is_context_cold(id)) malloc_cold
        else ...
I believe the overhead of this approach is fairly high since the
context identification will happen at each dynamic call. E.g., Sumner
et al. measured the overhead to be ~2% overall for medium-sized
programs in "Precise Calling Context Encoding". We anticipate the
runtime identification of calling contexts on large workloads to be
prohibitively expensive.

Note that these are just a few ideas and we are currently leaning
towards (1). Happy to hear about any motivating data you may have for
these approaches, though an in-depth discussion of this should
probably be reserved for an RFC which Teresa will share soon.
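To make idea (1) concrete, here is a hypothetical source-level view of the cloned cold path; malloc_cold and bar_cold are invented names, and the real transformation would operate on LLVM IR rather than source:

#include <cstdlib>

// Hypothetical cold-path allocator entry point; a real implementation
// might place such objects in a colder region of the heap.
void *malloc_cold(size_t Size) { return malloc(Size); }

// Original function, still reached from the hot context foo -> bar.
void *bar(size_t Size) { return malloc(Size); }

// Clone generated for the cold context baz -> bar_cold.
void *bar_cold(size_t Size) { return malloc_cold(Size); }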

Hi Snehasish,

Thanks for the analysis of the different potential solutions to the context identification problem. Regarding #3, I’m wondering if frame-chain based virtual unwinding for heap allocations only could speed it up. But yeah, perhaps it would be more appropriate to move that discussion to the upcoming RFC from Teresa.

Thanks,

Hongtao

Thanks for sharing the progress and details on the binary format. Overall this looks like a clean design that fits the current PGO profile format with extensions.

Some high level comments:

  • Does memprof/PGHO work with today’s IRPGO, i.e. can we have one instrumented build to collect both PGO and PGHO profiles, or will we need separate PGO instrumentation builds for each? In that case CSPGO + PGHO would need three iterations of training and building, which would be a significant operational cost.
  • I think some of the problems memprof faced when dealing with storing calling contexts and mapping contexts to IR are very similar to CSSPGO’s. I’m wondering if it makes sense to promote some existing infrastructure to be more general beyond just serving CSSPGO. One example is the IR mapping you mentioned (quoted below). In CSSPGO, we have the exact same need, and it’s handled by SampleContextTracker, which queries a context trie using an instruction/DILocation.

> Because the MIB corresponding to the A->B context is associated with function B in the profile, we do not find it by looking at function A’s profile when we see function A’s malloc call during matching. To address this we need to keep a correspondence from debug locations to the associated profile information.

  • The serialization of calling contexts and the pruning of calling contexts are also examples of shared problems, and we’ve put in some effort to build effective solutions (e.g. the offline preinliner for the most effective pruning, which I think could be adapted to help keep the most important allocation contexts). Perhaps some of the frameworks can be merged, so LLVM has general context-aware PGO support that can be leveraged by different kinds of PGO (IRPGO, PGHO, CSSPGO). If you think this is worth pursuing, we’d be happy to help too.

More on the details:

  • I saw that MemInfoBlock contains alloc/dealloc cpuid. Does that make the memprof profile non-deterministic, in the sense that running memprof twice on the exact same program and input would yield bit-wise different memory profiles? I think the IR PGO profile is deterministic?

  • Why do we use `file:line:discriminator` instead of `func:line_offset:discriminator`? The latter would be more resilient to source changes. If the function name string is too long, we could perhaps leverage the MD5 encoding used by sample PGO?

  • Is the design of the mmap section (quoted below) trying to support memprof for multiple binaries in the same process at the same time, or mainly to handle multiple non-consecutive executable segments for a single binary?

> The process memory mappings for the executable segment during profiling are stored in this section. This allows symbolization during post processing for binaries which are built with position independent code. For now all read only, executable mappings are recorded, however in the future, mappings for heap data can also potentially be stored.

  • Do we need each function record to have its own schema? Do we expect different functions to use different versions/schemas? This is very flexible, but I’m wondering what the use case is. If the schema is for compatibility across versions, perhaps a file-level schema would be enough?

> The InstrProfRecord for each function will hold the schema and an array of Memprof info blocks, one for each unique allocation context.

Thanks,

Wenlei

Just a quick note – IRPGO profile is not deterministic with multi-threaded programs due to contention (there is of course an atomic update mode, but it can be slow). Asynchronous dumping is another reason that the profile is not guaranteed to be repeatable.

David

Hi Wenlei,

Thanks for taking a look! Added responses inline.

Just a quick note -- IRPGO profile is not deterministic with multi-threaded programs due to contention (there is of course an atomic update mode, but it can be slow). Asynchronous dumping is another reason that the profile is not guaranteed to be repeatable.

David

Thanks for sharing the progress and details on the binary format. Overall this looks like a clean design that fits the current PGO profile format with extensions.

Some high level comments:

Does memprof/PGHO work with today's IRPGO, i.e. can we have one instrumented build to collect both PGO and PGHO profiles, or will we need separate PGO instrumentation builds for each? In that case CSPGO + PGHO would need three iterations of training and building, which would be a significant operational cost.

Our focus is to have a single combined IR instrumentation and PGHO instrumentation phase to keep operational costs low. For CSPGO today, this would be the second IR instrumentation phase. We also intend to support a separate PGHO instrumentation phase.

I think some of the problems memprof faced when dealing with storing calling contexts and mapping contexts to IR are very similar to CSSPGO's. I'm wondering if it makes sense to promote some existing infrastructure to be more general beyond just serving CSSPGO. One example is the IR mapping you mentioned (quoted below). In CSSPGO, we have the exact same need, and it's handled by `SampleContextTracker`, which queries a context trie using an instruction/DILocation.

> Because the MIB corresponding to the A->B context is associated with function B in the profile, we do not find it by looking at function A’s profile when we see function A’s malloc call during matching. To address this we need to keep a correspondence from debug locations to the associated profile information.

Yes, the context tracker is quite relevant to the IR matching need. Teresa will share the detailed design soon and we can evaluate the benefit of reusing the existing logic from CSSPGO. I think this is orthogonal to this RFC (the serialization format), so we can defer a detailed discussion to the next one.

The serialization of calling contexts and the pruning of calling contexts are also examples of shared problems, and we've put in some effort to build effective solutions (e.g. the offline preinliner for the most effective pruning, which I think could be adapted to help keep the most important allocation contexts). Perhaps some of the frameworks can be merged, so LLVM has general context-aware PGO support that can be leveraged by different kinds of PGO (IRPGO, PGHO, CSSPGO). If you think this is worth pursuing, we'd be happy to help too.

We intend to retain as much of the calling context information as possible until IR matching. This is where we can leverage common solutions. We would be happy to generalize where appropriate and intend to tackle this topic in detail in the next RFC.

More on the details:

I saw that MemInfoBlock contains alloc/dealloc cpuid. Does that make the memprof profile non-deterministic, in the sense that running memprof twice on the exact same program and input would yield bit-wise different memory profiles? I think the IR PGO profile is deterministic?

As David mentioned, keeping the PGHO profile deterministic is a non-goal since the IR PGO profile is also non-deterministic.

Why do we use `file:line:discriminator` instead of `func:line_offset:discriminator`? The latter would be more resilient to source changes. If the function name string is too long, we could perhaps leverage the MD5 encoding used by sample PGO?

We need to use the file path instead of the function name to be able to distinguish COMDAT functions. The line_offset-based matching is more resilient if the entire function is moved; I think it's a good idea and we can incorporate it into the IR matching phase.
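For concreteness, a small invented example contrasting the two encodings (file, function name, and line numbers are all made up):

#include <cstdlib>

// util.cc, with this definition assumed to start at util.cc:120
void *makeBuffer() {
  return malloc(4096); // assume this allocation site is at util.cc:127
}
// file:line:discriminator        -> util.cc:127:0
// func:line_offset:discriminator -> makeBuffer:7:0  (offset = 127 - 120)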

Is the design of the mmap section (quoted below) trying to support memprof for multiple binaries in the same process at the same time, or mainly to handle multiple non-consecutive executable segments for a single binary?

> The process memory mappings for the executable segment during profiling are stored in this section. This allows symbolization during post processing for binaries which are built with position independent code. For now all read only, executable mappings are recorded, however in the future, mappings for heap data can also potentially be stored.

While we only intend to support Memprof optimizations for the main binary, retaining all executable mappings allows future analysis tools to symbolize shared library code.
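As a sketch of why the recorded mappings matter for position independent binaries: a runtime PC must be rebased to a binary-relative offset before debug info lookup (the struct fields below are illustrative, not the proposed section layout):

#include <cstdint>

// Illustrative mapping entry, not the actual serialized layout.
struct MapEntry {
  uint64_t Start, End;  // runtime address range of an executable segment
  uint64_t FileOffset;  // where that segment lives within the binary
};

// Rebase a runtime PC to a binary-relative offset for symbolization;
// returns 0 when the PC is not covered by this mapping.
uint64_t rebase(uint64_t PC, const MapEntry &M) {
  if (PC < M.Start || PC >= M.End)
    return 0;
  return PC - M.Start + M.FileOffset;
}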

Do we need each function record to have its own schema? Do we expect different functions to use different versions/schemas? This is very flexible, but I'm wondering what the use case is. If the schema is for compatibility across versions, perhaps a file-level schema would be enough?

> The InstrProfRecord for each function will hold the schema and an array of Memprof info blocks, one for each unique allocation context.

Yes, we do intend to support Memprof profile section merging via `llvm-profdata merge`. The schema overhead per function is low for now, so we opted for function granularity. We can revisit this if the overheads become high or if the IR metadata scheme intends to keep it at module granularity (in which case we don't need the extra fidelity).
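As a rough sketch of how a per-record schema keeps readers compatible across versions (field IDs, fixed-width values, and all names below are invented for illustration):

#include <cstdint>
#include <map>
#include <vector>

using FieldId = uint64_t;

// Sketch only: each record carries the list of field IDs it contains,
// so a reader driven by the record's own schema can decode records
// written by a different tool version (fixed-width values assumed).
std::map<FieldId, uint64_t> readRecord(const std::vector<FieldId> &Schema,
                                       const uint64_t *&Cursor) {
  std::map<FieldId, uint64_t> Fields;
  for (FieldId Id : Schema)
    Fields[Id] = *Cursor++;
  return Fields;
}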