[Modules] Faster compilation speed or better diagnostic messages?

Hi,

Recently we reached a decision point in the implementation of modules. Since one of the options may regress the user experience, I feel it is better to give it more visibility.

The feature in question is discarding declarations in the GMF (global module fragment). For example,

// foo.h
int f();
int g();

// M.cppm
module;
#include "foo.h"
export module M;
export using ::f;

Here, the declaration of g() in the module unit M.cppm is discardable. Informally, we can think of it as if there were never a declaration of g in M.cppm.
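
From a consumer’s point of view, the intended effect is roughly this (the file name and diagnostic comments are only illustrative):

// consumer.cpp
import M;

int a = f();  // OK: ::f is exported from M
int b = g();  // error: g was discarded from M's GMF, so it is neither
              //        visible to name lookup here nor reachable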

Previously we did not implement this feature, so currently, if a user of module M writes g() in their source code, they get a diagnostic like "please try to export 'g'".

So there are two paths to implementing this feature (sketched roughly below):

  1. Don’t write the discardable declarations to the BMI.
  2. Don’t read the discardable declarations during deserialization. But note that we’ll only know a declaration is discardable after deserializing it, so this actually means partial deserialization.
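
To make the difference concrete, here is a toy sketch of the two options (all type and function names below are made up for illustration; this is not the actual serialization code):

#include <optional>
#include <vector>

struct Decl {
  bool Discardable;  // true iff not decl-reachable from the module purview
  // ... the rest of the declaration ...
};

// Option 1: discardable decls never make it into the BMI at all.
std::vector<Decl> writeBMI(const std::vector<Decl> &GMFDecls) {
  std::vector<Decl> BMI;
  for (const Decl &D : GMFDecls)
    if (!D.Discardable)
      BMI.push_back(D);  // smaller BMI, and nothing to skip when reading
  return BMI;
}

// Option 2: everything is written; the reader must deserialize at least
// enough of each decl to learn that it is discardable, and only then drop it.
std::optional<Decl> readDecl(const Decl &Serialized) {
  if (Serialized.Discardable)  // known only after (partial) deserialization
    return std::nullopt;
  return Serialized;
}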

I think the first method matches the design intent, and it gives better compilation speed and a smaller BMI. The drawback of the first method is that it causes a regression: previously, when we used a non-exported decl, the compiler would note that we forgot to export it, but now it would only say something like “unknown identifier”, which may look like a regression in the user experience.

Personally, I prefer the first method, but I’d like to ask for users’ opinions to get wider visibility.

CC: @mordante @iains

Thanks,
Chuanqi

FWIW, my initial implementation did elide the decls completely but - as @ChuanqiXu notes - we changed this to preserve the user diagnostics.

CC @DanielaE

Do you have any kind of benchmark/idea of the cost of option 2?

I’m fine with option 1 regardless. If that ends up being a problem, there may be a third solution:
doing a partial serialization where we would just have a list of discarded names we could query, or something along those lines. Best of both worlds?
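
A rough sketch of what that could look like (everything below is hypothetical and standalone, not actual Clang API):

#include <string>
#include <string_view>
#include <unordered_set>

// Names of decls that were discarded from the GMF, kept alongside the BMI
// purely for diagnostics.
struct DiscardedNames {
  std::unordered_set<std::string> Names;
};

// Consulted only after ordinary lookup has already failed, so the fast path
// pays nothing extra.
bool isKnownDiscarded(std::string_view Name, const DiscardedNames &Table) {
  // If this returns true, the compiler could emit something like the current
  // "please try to export 'g'" hint instead of a plain "unknown identifier".
  return Table.Names.count(std::string(Name)) != 0;
}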

Stupid question: are there any possible objections to having non-reachable symbol names visible in BMIs, from a privacy/security standpoint? I wouldn’t think so, but I’d rather ask.

Actually, if it’s a public header (e.g. the libc++ ones) then of course there’s no issue.

If the information is commercially ‘sensitive’, then I think you’d need to construct your named module with partitions implementing the sensitive portion. If you have sensitive information in a GMF, then at minimum the reachable decls have to remain - which seems to make complete privacy impossible?

Do you have any kind of benchmark/idea of the cost of option 2?

Not yet, since we haven’t implemented either option. Also, we don’t have a good benchmark for modules yet.

The background here is that I found that using a large module (the std module) in a small program (~15 lines) compiles more slowly than the original direct #include. This is not good. The direct cause is that we have too many duplicated headers across the partitions of the module, and each partition may only use a small set of the declarations. So I think option 1 will mitigate/solve the issue.
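
For reference, the small program I have in mind is roughly of this shape (illustrative only, not the exact test case):

import std;

int main() {
  std::vector<int> v{1, 2, 3, 4, 5};
  for (int x : v)
    std::cout << x << ' ';
  std::cout << '\n';
}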

Stupid question: are there any possible objections to having non-reachable symbol names visible in BMIs, from a privacy/security standpoint? I wouldn’t think so, but I’d rather ask.

(IIUC) I don’t think so. Per the latest consensus of SG15, the BMI should be a transparent layer and we’re going to ship the source files, which means we won’t ship BMIs. So the privacy/security question doesn’t really arise.

@ChuanqiXu @iains Thanks for the answers

Then my vote is to try option 1, and if that resolves the compile times of std, I think it’s worth it. If we still want to have the discarded symbols for diagnostic purposes, maybe we can later add them back in a way that only requires deserialization on a lookup error, as we usually stop caring about performance once we start emitting errors.

You must not put anything sensitive into any module TU that creates a BMI (i.e. the PMIF, module interface partitions, or internal partitions) or that incorporates a BMI from an external module. All of these need to be available as source to compile the BMIs from. This is the same as with non-modular libraries: modules don’t increase protection when it comes to sensitive information.

hmm (probably we should make this a separate discussion)

  • as @ChuanqiXu points out, since you have to ship enough of the sources to rebuild the BMIs, those obviously cannot include sensitive information.
  • PMFs are likely irrelevant in this - since they imply a single TU module, and therefore no isolation at all.
  • Where we need some more discussion is in the use of internal partitions. As I understand things, one can arrange those so that none of their sources appear in any exported interface (i.e. these partitions are only ever referred to in implementation TUs) and thus you do not need to distribute any sources or (hypothetical) BMIs that would include their symbols. I suppose we should sketch up an example to illustrate this.
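
Something along these lines, perhaps (module, partition, and file names are purely illustrative):

// M.cppm - primary module interface: never mentions the internal partition
export module M;
export int api();

// M_secret.cppm - internal partition: it is not exported from, or imported
// into, any interface unit, so neither its source nor its (hypothetical) BMI
// needs to be distributed to consumers of M
module M:secret;
int secret_impl() { return 42; }

// M_impl.cpp - implementation unit: the only place that imports :secret
module M;
import :secret;
int api() { return secret_impl(); }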

Right, PMFs are irrelevant (or better: exactly as relevant as the PMIF, because the former is a part of the latter), and therefore I didn’t mention them.

To be more precise: the PMIF, and everything that is (possibly transitively) imported into the PMIF (i.e. any form of partitions), plus anything that is (possibly transitively) #included into the purviews of said TUs or their GMFs must be available as source code to enable the compilation of the PMIF of a named module. No sensitive information should appear in any of these sources/headers.

Personally, I prefer the option that gives the best user experience. One can argue that faster compilation speeds and smaller BMI size gives the better user experience on the assumption that most compilations will not need the extra information for diagnostic purposes, so smaller/faster is better. One can also argue that unactionable diagnostics are a major waste of developer time and lacking symbols for good diagnostics can lead to making something unactionable; it doesn’t matter how fast the compile is when you can’t get the code to work.

I think it would help to have some performance numbers. 2% speed/size vs 20% speed/size may help us reach a decision more easily. Lacking concrete details like that, and given the extreme complexity of diagnostics in C++ code and that nobody uses C++ for its blazing fast compile times, my intuition is to err on the side of having better diagnostics. (I know the hope with WG21 modules is to have improved compile times, but I do not think those improvements should come at the expense of a worse diagnostic experience than using includes unless the improved compile times are quite significant.)

Please let’s not forget [module.global.frag]/4

A declaration D in a global module fragment of a module unit is discarded if D is not decl-reachable from any declaration in the declaration-seq of the translation-unit.

[Note 3: A discarded declaration is neither reachable nor visible to name lookup outside the module unit, nor in template instantiations whose points of instantiation ([temp.point]) are outside the module unit, even when the instantiation context includes the module unit. — end note]

How are discarded declarations of any help to users if they are neither visible nor reachable? Are you referring to something different that is neither of those and is still present in the BMI?

Apologies if this is a dumb question, but are we using ExternalASTSource for this situation? This is typically used when we want to only partially deserialize an AST node, so we can retain the fact that it exists without paying the full cost of deserialization until we need it.

Here, I am reporting what we do - not claiming that it is optimal.

… this is where we do some magic when we are looking up decls in response to a potential typo or missing export. We actually relax the visibility requirements so that we can see things that might be a typo’ed name or a missed export. The latter allows diagnostics like “you must export ‘foo’ before using it”.

So, if we actually elide (as in physically remove) the decls, then that mechanism no longer works. What we have done in the existing patch is to mark the decls as elided (but keep them in the BMI). @ChuanqiXu is proposing to revert to the mechanism where we physically remove them, which will result in BMIs that are smaller and faster to process, but we will lose the ability mentioned here.

For my part, I am OK with the removal - and also agree with @AaronBallman that it’s easier to decide with some metrics (which we do not have yet).

Maybe you want to look at so-you-want-to-use-modules-cross-plattform-cpponsea.pdf, pages 27ff., to see what the current situation is and weigh the alternatives. The code for the argparse module is here.

@iains : the link to the full code for the demo we’ve been discussing in Varna is found on the last slide.

I too would like to see the size and speed differences between the two options. In general I prefer better diagnostics, but it depends on the cost.

Another question: if the declaration of g is a definition, what is stored? Only the named declaration or the entire definition?

Right now we just stream the AST - so the entire thing is in the BMI; this supports running codegen on the BMI.

One of the patch sets in my queue (and being refreshed) is to allow us to split the AST into two paths - one for codegen and one for the interface. This opens the way for stripping non-inline function bodies and other optimisations for the BMI.

GMF decl elision happens to be an optimisation that can be applied without making that split (modulo the discussion on diagnostic QoI).

After the split, I assume it would be cheap to “keep” the elided decls in the interface part to provide better diagnostics. If so, would that be an option?

I do not think so - it is the BMIs that we need to prune (more than the AST that travels onward to the codegen).

This is:
(a) because they are the semi-persistent entities that are consumed by multiple compilation jobs (the performance benefit of building a module is questionable unless there are multiple uses).

(b) because one of the larger burdens [maybe the largest, @ChuanqiXu?] for the consumer(s) is merging decls - and that is not helped at all unless we elide the unused ones.

AFAIR, we do as much ‘lazily’ as we can at the moment (e.g. so that we should not read a function body if we do not need it - although it still occupies space unnecessarily).

edit: @cor3ntin made some suggestions earlier in the thread about potentially storing reduced content to help with diagnostic QoI.

Thanks for the info.

I like @cor3ntin’s suggestion too.