David Blaikie <email@example.com> writes:
> -Xclang -fmodules-codegen
> -Xclang -fmodules-debuginfo
> It should work naturally either way, I think.
Yep, all works now, thanks.
Somewhat related question: what happens if I pass -fPIC? I assume I
will need different .o files for PIC/non-PIC. Will I also need different
.pcm files?
I tend to think the answer is yes, since at a minimum -fPIC may
define preprocessor macros that change the translation unit.
If a flag changes things like preprocessor macros then it would need to be
consistently passed to all stages (including the first one - that generates
the .pcm). I know Clang has some enforcement of matching flags between PCM
generation steps and uses (& hopefully that also triggers on the
PCM->Object step too, though I haven't checked).
I'm not sure how lenient those checks are (whether they have many false
positives (flagging mismatched flags that are benign/can be composed
without conflicting) or false negatives (allowing mismatched flags that are
actually problematic)).
Richard might be able to say more about that.
We have a bunch of flags whitelisted that are permitted to change between
module build and use. We permit flags to change between module build and
use if the flag change does not affect interoperability of the pcm file,
even though in some cases the resulting combination doesn't make a lot of
sense. For example, flags that affect predefined macros (__OPTIMIZE__,
__PIC__, ...) *are* permitted to vary between module build and use, with
each side seeing the value of the macro as it was defined for that
compilation, and flags that affect minor details of the language mode are
also permitted to vary.
The whitelist is likely incomplete (there are probably flags for which it
would be useful and reasonable for them to differ between module build and
use, but where we disallow them differing), and as noted above allows some
combinations that work (we'll do what you asked us to do) but might not
make a lot of sense.
I think -O flags also generate preprocessor defines, so I expect they would
be checked/enforced by this too, and that it would not be possible to use a
PCM built with one -O level from a compilation using another.
I guess what I am trying to understand is if .pcm is just the AST
or if it actually includes some object/intermediate code? For example,
does it make sense to pass -O when generating .pcm? Sorry if these
are stupid/obvious questions ;-).
In our default configuration, the .pcm file is just the AST (at least for
now). We also have a mode in which the .pcm file is actually a .o file that
contains the AST as well as debug information, and there's been some
discussion of including other things in it too, for example LLVM IR for
inline functions (to speed up optimized compilation of module users).
Even now, it makes sense to pass those flags, and as David says, the
easiest thing to do is to pass the same flags you use for source
compilations to module compilations. As an example of a case where this
makes a difference today, the glibc headers will sometimes provide
different definitions of C standard library functions based on whether the
__OPTIMIZE__ macro is defined, and if you want the module build to take
advantage of that, you need to make sure you pass -O<n> to the module
compilation as well.
The intent is that the same set of flags would typically be passed to a
compilation building the module interface for a particular module as would
be passed to a compilation building an implementation unit of that module.
That way, the
user can decide on a per-module basis how they want the compiler to act
(including things like which warnings they want enabled for their module,
and perhaps some details of the language it's written in).