Compiler Driver Decisions

LLVMers,

Since there's been little feedback on the design document I sent out, I'm making some decisions in order to move the work forward. If you have strong feelings about any of these, voice them now!

1. Name = llvmcc
2. The config file format will resemble Microsoft .ini files
    (name=value in sections)
3. -O set of options will control what gets done and what kind of output
    is generated.

I'm going to start documenting the design and usage of llvmcc in a new HTML document in the docs directory. You can comment further on the design from the commits, if you like.

Reid.

Dear All,

I thought I would chime in with some ideas and opinions:

o Configuration Files

If it isn't too much trouble, I think we should go with XML for the following reasons:

1) We wouldn't need to implement a parsing library. There are several XML parsing libraries available, and I'm guessing that they're available in several different programming languages (Reid, am I right on that?).

2) It makes it easier for other programmers to write tools that read, modify, and/or write the configuration file correctly. If my assumption about XML libraries being available in several different languages is correct, then that means we don't need to write a library for each language that people want to use.

3) I believe it would keep the format flexible enough for future expansion (but again, Reid would know better here).

Having configuration files that can be manipulated accurately is important for things like automatic installation, GUIs, configuration tools, etc.
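To make the trade-off concrete, here is a small sketch of the same (entirely hypothetical) driver configuration expressed both ways, parsed with Python's standard library. The section names, keys, and values are invented for illustration; the thread never shows an actual llvmcc schema.

```python
# Hypothetical llvmcc configuration in both candidate formats.
# Keys and values are made up; only the parsing mechanics are the point.
import configparser
import xml.etree.ElementTree as ET

INI_TEXT = """
[lang]
name=c
translator=llvm-gcc

[optimization]
opt1=-simplifycfg -mem2reg
"""

XML_TEXT = """
<llvmcc>
  <lang name="c" translator="llvm-gcc"/>
  <optimization opt1="-simplifycfg -mem2reg"/>
</llvmcc>
"""

# INI: one stdlib call; flat name=value pairs grouped in sections.
ini = configparser.ConfigParser()
ini.read_string(INI_TEXT)
ini_translator = ini["lang"]["translator"]

# XML: also one stdlib call, and the schema can later grow nested
# structure without breaking existing readers.
root = ET.fromstring(XML_TEXT)
xml_translator = root.find("lang").get("translator")

assert ini_translator == xml_translator == "llvm-gcc"
```

Both reads are a few lines either way; the argument in the thread is really about third-party tooling and future schema evolution, not parsing effort.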

o Object Files

I've noticed that there's a general agreement that we should not encapsulate LLVM bytecode files inside of another object format (such as ELF). However, I'd like to pose a few potential benefits that encapsulation in ELF might provide:

1) It may provide a way for standard UNIX tools to handle bytecode files without modification. For example, programs like ar, nm, and file all take advantage of the ELF format. If we generated LLVM ELF files, we wouldn't need to write our own nm and ar implementations and port them to various platforms.

2) It could mark the bytecode file with other bits of useful information, such as the OS and hardware on which the file was generated.

3) It may provide a convenient means of adding dynamic linking with other bytecode files.

4) It may provide a convenient place to cache native translations for use with the JIT.

Here are the disadvantages I see:

1) Increased disk usage. For example, symbol table information would duplicate the information already in the bytecode file.

2) Automatic execution. Ideally, if I have a bytecode executable, I want to run it directly. On UNIX, that is done with #!<interpreter>. I believe ELF provides similar functionality (where exec()ing the file can load a program or library to do JIT compilation), but if it doesn't, then we lose this feature.

o Compiler Driver Name

I'd vote for either llvmcc (llvm compiler collection) or llvmcd (llvm compiler driver). To be more convenient, we could call it llc (LLvm Compiler) or llcd (LLvm Compiler Driver). Calling it llc would require renaming llc to something else, which might be appropriate since I view llc as a "code generator" and not as a "compiler" (although both terms are technically accurate).

Generally, I recommend keeping the name short and not using hyphens (because it's slower to type them).

o Optimization options

I agree with the idea of using -O<number> for increasing levels of optimization, with -O0 meaning no optimization. It's a pretty intuitive scheme, and many Makefiles that use GCC use the -O option.

-- John T.

o Object Files

I've noticed that there's a general agreement that we should not
encapsulate LLVM bytecode files inside of another object format (such
as ELF). However, I'd like to pose a few potential benefits that
encapsulation in ELF might provide:

1) It may provide a way for standard UNIX tools to handle bytecode
files without modification. For example, programs like ar, nm, and
file all take advantage of the ELF format. If we generated LLVM ELF
files, we wouldn't need to write our own nm and ar implementations and
port them to various platforms.

System `nm' has no meaning if it's run on an LLVM bytecode file. Right
now, we already have an llvm-nm, which works by finding the *LLVM*
symbols (globals and functions) and printing out whether they are
defined or not.

If we just plop the binary LLVM bytecode in an ELF section, it will go
happily ignored by the system nm, and no useful output will be produced.

So, in essence, we *do* need our own nm, ar, etc. Otherwise, what
you're suggesting is that any bytecode file is in its own ELF section
with a *FULL* native translation separately from it, which is overkill,
IMHO.

2) It could mark the bytecode file with other bits of useful
information, such as the OS and hardware on which the file was
generated.

We already have that: in addition to pointer size, Reid has added the
capability to encode the target triple of the system directly into the
bytecode file.

3) It may provide a convenient means of adding dynamic linking with
other bytecode files.

Reid has added this as well.

4) It may provide a convenient place to cache native translations for
use with the JIT.

This is an interesting concept, but it seems to be the only one of the
four left, and I'm not sure it's worth the trouble of writing and
re-writing and re-patching native code to support this...

Here are the disadvantages I see:

1) Increased disk usage. For example, symbol table information would
duplicate the information already in the bytecode file.

True that.

2) Automatic execution. Ideally, if I have a bytecode executable, I
want to run it directly. On UNIX, that is done with #!<interpreter>.
I believe ELF provides similar functionality (where exec()ing the file
can load a program or library to do JIT compilation), but if it
doesn't, then we lose this feature.

1. Use LLEE :)
2. Tell the OS (in this case Linux) how to run bytecode files directly:
   http://llvm.cs.uiuc.edu/docs/GettingStarted.html#optionalconfig

o Compiler Driver Name

I'd vote for either llvmcc (llvm compiler collection) or llvmcd (llvm
compiler driver). To be more convenient, we could call it llc (LLvm
Compiler) or llcd (LLvm Compiler Driver). Calling it llc would
require renaming llc to something else, which might be appropriate
since I view llc as a "code generator" and not as a "compiler"
(although both terms are technically accurate).

I've voted for llvmcc before, but it was turned down.

LLC is a nice idea, but yeah, it's already taken, and sounds like LCC
which is another compiler...

llvmcd sounds like "chdir compiled to llvm" or "LLVM-specific chdir"
given the other tools: llvm-as, llvm-gcc, etc.

o Optimization options

I agree with the idea of using -O<number> for increasing levels of
optimization, with -O0 meaning no optimization. It's a pretty
intuitive scheme, and many Makefiles that use GCC use the -O option.

I agree with -O0 instead of -On.

o Configuration Files

If it isn't too much trouble, I think we should go with XML for the
following reasons:

1) We wouldn't need to implement a parsing library. There are several
XML parsing libraries available, and I'm guessing that they're available
in several different programming languages (Reid, am I right on that?).

Yes, there are many to choose from. But, some of them are larger than
LLVM :). We'd choose expat (fast, simple, small, good compatibility,
lacks features we don't need). Either that or just write a really simple
recursive descent parser.
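For scale, Python's standard library exposes expat directly, and a minimal reader really is only a handful of lines. The element and attribute names below are invented, not an actual llvmcc schema.

```python
# Minimal expat-based reader sketch. The <llvmcc>/<lang> names are
# hypothetical; only the size of the code is the point.
import xml.parsers.expat

config = {}

def start_element(name, attrs):
    # Record every element's attributes under its tag name.
    config[name] = attrs

parser = xml.parsers.expat.ParserCreate()
parser.StartElementHandler = start_element
parser.Parse('<llvmcc><lang name="c" translator="llvm-gcc"/></llvmcc>', True)
```

After parsing, `config["lang"]["translator"]` holds `"llvm-gcc"`, so even the "gigantic library" option can be used with very little driver-side code.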

2) It makes it easier for other programmers to write tools that read,
modify, and/or write the configuration file correctly. If my assumption
about XML libraries being available in several different languages is
correct, then that means we don't need to write a library for each
language that people want to use.

Not sure what you mean here. What's an XML library, and are they supposed
to be available in different natural languages, different computer
languages, or different programming languages? Do you mean natural languages?

3) I believe it would keep the format flexible enough for future
expansion (but again, Reid would know better here).

Yes. It wouldn't be painless, but going from DTD1 -> DTD2 is much less
painful than going from INI -> XML. That is, the ENTIRE format doesn't
have to change; it's just incrementally changing its document type
definition within the XML format.

Having configuration files that can be manipulated accurately is
important for things like automatic installation, GUIs, configuration
tools, etc.

Yes, that was my main argument too ... precision for us and others.

o Object Files

I've noticed that there's a general agreement that we should not
encapsulate LLVM bytecode files inside of another object format (such as
ELF). However, I'd like to pose a few potential benefits that
encapsulation in ELF might provide:

1) It may provide a way for standard UNIX tools to handle bytecode files
without modification. For example, programs like ar, nm, and file all
take advantage of the ELF format. If we generated LLVM ELF files, we
wouldn't need to write our own nm and ar implementations and port them
to various platforms.

Consider this: both ar and nm look inside the .o file and read the ELF
format. While we could put the bytecode in a .llvm section, neither tool
would read that section. They would instead look for symbols in other
sections. So, to be useful, we would now have to bloat the .o file with
additional (normal) ELF sections that would allow tools like ar and nm
to discover the symbols in the file. I think this is a big waste of
time when we already have ar and nm replacements.

As for the file command, the /etc/magic file can contain a single line
that accurately identifies LLVM object files (the first 4 chars are "llvm").
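The same four-byte check Reid describes is easy to sketch outside of /etc/magic too. The file contents below are fake placeholders, not real bytecode or ELF objects.

```python
# file(1)-style identification sketch: LLVM bytecode files of this era
# begin with the four characters "llvm"; ELF objects begin with \x7fELF.
import os
import tempfile

def is_llvm_bytecode(path):
    with open(path, "rb") as f:
        return f.read(4) == b"llvm"

# Two throwaway files with fake contents to demonstrate the check.
bc = tempfile.NamedTemporaryFile(delete=False)
bc.write(b"llvm" + b"\x00fake-bytecode")
bc.close()

elf = tempfile.NamedTemporaryFile(delete=False)
elf.write(b"\x7fELF" + b"\x00fake-object")
elf.close()

assert is_llvm_bytecode(bc.name)
assert not is_llvm_bytecode(elf.name)

os.unlink(bc.name)
os.unlink(elf.name)
```

The corresponding /etc/magic entry would just match the string "llvm" at offset 0.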

2) It could mark the bytecode file with other bits of useful
information, such as the OS and hardware on which the file was generated.

That's currently supported with the "target-triple" I just added to the
bytecode format.

3) It may provide a convenient means of adding dynamic linking with
other bytecode files.

What did you have in mind?

4) It may provide a convenient place to cache native translations for
use with the JIT.

It doesn't sound convenient to me. It would be faster to just mmap a
whole region of memory with some kind of index onto disk and reload it
later.

Here are the disadvantages I see:

1) Increased disk usage. For example, symbol table information would
duplicate the information already in the bytecode file.

Right ;>

o Compiler Driver Name

I'd vote for either llvmcc (llvm compiler collection) or llvmcd (llvm
compiler driver). To be more convenient, we could call it llc (LLvm
Compiler) or llcd (LLvm Compiler Driver). Calling it llc would require
renaming llc to something else, which might be appropriate since I view
llc as a "code generator" and not as a "compiler" (although both terms
are technically accurate).

I'd vote to leave llc alone. However, I like your shortening idea. It
makes ll-build much more tenable.

I thought I would chime in with some ideas and opinions:

o Configuration Files

If it isn't too much trouble, I think we should go with XML for the
following reasons:

1) We wouldn't need to implement a parsing library. There are several
XML parsing libraries available, and I'm guessing that they're available
in several different programming languages (Reid, am I right on that?).

So that's the tension: with XML, there are lots of off-the-shelf tools
that you can use to parse it. OTOH, this should be an extremely trivial
file that does not need any parsing per se. Unless there is a *clear*
advantage to doing so, we should not replace a custom 20 LOC parser with a
gigantic library.

2) It makes it easier for other programmers to write tools that read,
modify, and/or write the configuration file correctly. If my assumption
about XML libraries being available in several different languages is
correct, then that means we don't need to write a library for each
language that people want to use.

I don't buy this at all. In particular, these files are provided by
front-end designers for the sole consumption of the driver. NO other
tools should be looking in these files; they should use the compiler
driver directly.

3) I believe it would keep the format flexible enough for future
expansion (but again, Reid would know better here).

You can do this with any format you want, just include an explicit version
number.

Having configuration files that can be manipulated accurately is
important for things like automatic installation, GUIs, configuration
tools, etc.

Again, none of these tools should be using these files.

o Object Files

... Misha did a great job responding to these ...

4) It may provide a convenient place to cache native translations for
use with the JIT.

For native translation caching, we will just emit .so files eventually.
It is no easier to attach a .so file to an existing ELF binary than it is
to attach it to a .bc file. Also, we probably *don't* want to attach the
cached translations to the executables, though I'm sure some will disagree
strenuously with me :) In any case, this is still a ways off.

o Optimization options

I agree with the idea of using -O<number> for increasing levels of
optimization, with -O0 meaning no optimization. It's a pretty intuitive
scheme, and many Makefiles that use GCC use the -O option.

The problem is that -O0 does *not* mean no optimization. In particular,
with GCC, -O0 runs optimizations that reduce the compile time of the
program (e.g. DCE) that do not impact the debuggability of the program.
Making the default option be -O1 would help deal with this, but I'm still
not convinced that it's a good idea (lots of people have -O0 hard coded
into their makefiles). *shrug*

-Chris

I don't see why we have to maintain 100% compatibility with GCC. We're
so incompatible in so many other ways that I don't see it as a
necessity. For example, we probably won't have all the -f and -X and -W
options that GCC does. So, why can't we just DEFINE the optimization
levels and be done with it? It's not like users of LLVM can just use
their existing makefiles; they will have to make some adjustments. Also,
I don't know of very many people that use -O0. Typical usage is either
no -O option on the command line or -O2, -O3. In those typical use cases
the driver will give them basically what they expect. So I propose:

-O0 = zero optimization, raw output from the front end
-O1 = default fast/lightweight optimization, emphasis on
        making compilation faster
-O2 = moderate/standard optimization that make significant improvements
        in generated code but don't take significant computation time
        to optimize
-O3 = aggressive optimization, regardless of computation time with the
        effect of producing the fastest executable
-O4 = life-long optimization, includes -O3 but also profiles and
        re-optimizes
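Reid's proposal amounts to a lookup from flag to pass pipeline. Here is a hedged sketch of that mapping; the pass names are purely illustrative placeholders, not the driver's actual pipeline.

```python
# Sketch of the proposed -O levels as a flag -> pass-list table.
# Pass names are invented for illustration only.
OPT_LEVELS = {
    "-O0": [],                                    # raw front-end output
    "-O1": ["-mem2reg"],                          # fast/lightweight
    "-O2": ["-mem2reg", "-simplifycfg", "-gcse"], # moderate/standard
    "-O3": ["-mem2reg", "-simplifycfg", "-gcse",
            "-inline", "-aggressive-ipo"],        # aggressive
}
# -O4 = everything in -O3 plus profile-driven re-optimization.
OPT_LEVELS["-O4"] = OPT_LEVELS["-O3"] + ["-profile-reoptimize"]

def passes_for(flag):
    # No -O flag on the command line behaves like the default level.
    return OPT_LEVELS.get(flag, OPT_LEVELS["-O1"])

assert passes_for("-O0") == []
assert "-profile-reoptimize" in passes_for("-O4")
```

One design point this makes visible: whether -O0 should be an empty list or share -O1's contents is exactly the disagreement discussed below.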

Reid.

Reid Spencer wrote:

o Configuration Files

If it isn't too much trouble, I think we should go with XML for the following reasons:

1) We wouldn't need to implement a parsing library. There are several XML parsing libraries available, and I'm guessing that they're available in several different programming languages (Reid, am I right on that?).

Yes, there are many to choose from. But, some of them are larger than
LLVM :). We'd choose expat (fast, simple, small, good compatibility,
lacks features we don't need). Either that or just write a really simple
recursive descent parser.

2) It makes it easier for other programmers to write tools that read, modify, and/or write the configuration file correctly. If my assumption about XML libraries being available in several different languages is correct, then that means we don't need to write a library for each language that people want to use.

Not sure what you mean here. What's an XML library, and are they supposed
to be available in different natural languages, different computer
languages, or different programming languages? Do you mean natural languages?

I meant programming languages. Python already has interfaces to XML. I bet Perl has a module to parse XML too.

3) I believe it would keep the format flexible enough for future expansion (but again, Reid would know better here).

Yes. It wouldn't be painless, but going from DTD1 -> DTD2 is much less
painful than going from INI -> XML. That is, the ENTIRE format doesn't
have to change; it's just incrementally changing its document type
definition within the XML format.

Having configuration files that can be manipulated accurately is important for things like automatic installation, GUIs, configuration tools, etc.

Yes, that was my main argument too ... precision for us and others.

My general impression is that when one rolls their own file format, others write shell scripts that handle the common cases but usually foul up on corner cases. I figured XML would reduce the likelihood that this scenario would happen.

Of course, with XML, programs could still do things incorrectly, but it would be easier to get it right.

o Object Files

I've noticed that there's a general agreement that we should not encapsulate LLVM bytecode files inside of another object format (such as ELF). However, I'd like to pose a few potential benefits that encapsulation in ELF might provide:

1) It may provide a way for standard UNIX tools to handle bytecode files without modification. For example, programs like ar, nm, and file all take advantage of the ELF format. If we generated LLVM ELF files, we wouldn't need to write our own nm and ar implementations and port them to various platforms.

Consider this: both ar and nm look inside the .o file and read the ELF
format. While we could put the bytecode in a .llvm section, neither tool
would read that section. They would instead look for symbols in other
sections. So, to be useful, we would now have to bloat the .o file with
additional (normal) ELF sections that would allow tools like ar and nm
to discover the symbols in the file. I think this is a big waste of
time when we already have ar and nm replacements.

In reply to Misha's comment, this is how nm and ar would work without modification: the symbol information would have to be duplicated in the ELF section that holds the symbol table.

Let me back up for a minute. As far as LLVM object files and executables go, here's the features that I would want, in order of importance:

1) Automatic execution of bytecode executable files.

I would like to be able to run bytecode files directly, the same way I can run a shell script, Python program, or ELF executable directly. I think having to specify an interpreter on the command line (like java program.class) or having to enter a different execution environment (llee /bin/sh) is inconvenient and doesn't integrate into the system as well as it could.

2) Integration with system tools.

It would be nice if a common set of tools could manipulate bytecode files. Having the system ar, nm, and file programs work on bytecode and native code object files would be great. Having LLVM provided versions that do the same thing would be second best. A parallel set of LLVM tools is third best.

ELF encapsulation gets us #2, which, at this point, I think I'm willing to say isn't all that important. I think LLVM provided tools will do.

In regards to Misha's comments about the automatic execution of bytecode files, there are several ways to do it:

1) Have bytecode files start with #!<JIT/llee/whatever> (portable)
2) Encapsulate with ELF
3) Register the type with the kernel (Linux only)

I don't really care for the llee approach, as it can be broken with subsequent LD_PRELOADs, requires that I enter an alternative execution environment, and requires that I remember to run llee. I believe the methods above are less error-prone and integrate into the system more cleanly.

-- John T.

So I propose:

[snip]

-O3 = aggressive optimization, regardless of computation time with the
        effect of producing the fastest executable

I would suggest splitting -O3 into 2 or more levels of optimization,
because as written, -O3 sounds pretty scary: "regardless of computation
time". Given that some people think several minutes of compile time is
acceptable, I think it's useful to split it into "aggressive opt",
"aggressive interprocedural opt", and "aggressive interprocedural
analysis with interprocedural opt".

Chris Lattner wrote:

I thought I would chime in with some ideas and opinions:

o Configuration Files

If it isn't too much trouble, I think we should go with XML for the
following reasons:

1) We wouldn't need to implement a parsing library. There are several
XML parsing libraries available, and I'm guessing that they're available
in several different programming languages (Reid, am I right on that?).

So that's the tension: with XML, there are lots of off-the-shelf tools
that you can use to parse it. OTOH, this should be an extremely trivial
file that does not need any parsing per se. Unless there is a *clear*
advantage to doing so, we should not replace a custom 20 LOC parser with a
gigantic library.

2) It makes it easier for other programmers to write tools that read,
modify, and/or write the configuration file correctly. If my assumption
about XML libraries being available in several different languages is
correct, then that means we don't need to write a library for each
language that people want to use.

I don't buy this at all. In particular, these files are provided by
front-end designers for the sole consumption of the driver. NO other
tools should be looking in these files; they should use the compiler
driver directly.

I don't believe this is realistic. This is a configuration file that tells the driver how to compile stuff. There is a definite chance that it will need to be modified as parts of the compiler are updated, replaced, or removed.

Think of installing a new frontend. It would be nice if its installation could automatically insert itself into the driver's configuration file.

Or how about writing a program that prints the compiler's configuration to stdout?

Or an administrator who wants to write a quick program to re-configure the compiler on several different machines he administrates?

I think we have two choices for making these operations convenient. Either we provide command line tools for modifying the configuration, or we make the file's format such that these tools can be easily and accurately written by others on demand.

3) I believe it would keep the format flexible enough for future
expansion (but again, Reid would know better here).

You can do this with any format you want, just include an explicit version
number.

Having configuration files that can be manipulated accurately is
important for things like automatic installation, GUIs, configuration
tools, etc.

Again, none of these tools should be using these files.

o Object Files

... Misha did a great job responding to these ...

4) It may provide a convenient place to cache native translations for
use with the JIT.

For native translation caching, we will just emit .so files eventually.
It is no easier to attach a .so file to an existing ELF binary than it is
to attach it to a .bc file. Also, we probably *don't* want to attach the
cached translations to the executables, though I'm sure some will disagree
strenuously with me :slight_smile: In any case, this is still a way out.

o Optimization options

I agree with the idea of using -O<number> for increasing levels of
optimization, with -O0 meaning no optimization. It's a pretty intuitive
scheme, and many Makefiles that use GCC use the -O option.

The problem is that -O0 does *not* mean no optimization. In particular,
with GCC, -O0 runs optimizations that reduce the compile time of the
program (e.g. DCE) that do not impact the debuggability of the program.
Making the default option be -O1 would help deal with this, but I'm still
not convinced that it's a good idea (lots of people have -O0 hard coded
into their makefiles). *shrug*

-Chris

-- John T.

Unfortunately, the #!... convention is not supported on all operating
systems although it is very common on UNIX. I think we're going to end
up with a mixture of things:

1. The llee (llvm-run) approach needs to be maintained for those systems
    where all you can do is run a program (think OS/390, Windows, etc.)
2. We can do the #! trick now without modifying the bytecode file. We
    have a convention like this:
    #!/path/to/llvm-run -
    llvm......(bytecode)
    When llvm-run is given the - option, it reads the rest of the file
    as bytecode. This is how a shell works too.
3. We might want to eventually have an installer that registers the type
    with the kernel but I think that's a long way off. We should
    concentrate effort on items 1. and 2. above.

I don't think we need to do any encapsulation with ELF to accomplish the
same goals.
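Reid's item 2 can be sketched directly: the file's first line names the interpreter, and llvm-run with "-" reads the remainder as bytecode, the same way a shell skips its own #! line. The payload bytes below are fake, and the reader function is a sketch of the described convention, not llvm-run itself.

```python
# Sketch of the #! trick from item 2: a #! header line prepended to
# unmodified bytecode. The payload here is a fake stand-in.
import io

payload = b"llvm" + b"\x00fake-bytecode"
script = b"#!/path/to/llvm-run -\n" + payload

def read_bytecode(stream):
    # Consume the interpreter line, as llvm-run would when given "-".
    first = stream.readline()
    assert first.startswith(b"#!")
    # Everything after the newline is the raw bytecode.
    return stream.read()

assert read_bytecode(io.BytesIO(script)) == payload
```

Note this leaves the bytecode bytes themselves untouched; only a one-line prefix is added to the file.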

Reid.

Okay, sounds good. How about:

-O3agg
-O3ipo
-O3aggipo

:) ?

Reid.

I was thinking more like -O3, -O4, -O5, and the "super-duper run-time
life-long optimization" is -O6 :)

Reid Spencer wrote:

In regards to Misha's comments about the automatic execution of bytecode files, there are several ways to do it:

1) Have bytecode files start with #!<JIT/llee/whatever> (portable)
2) Encapsulate with ELF
3) Register the type with the kernel (Linux only)

I don't really care for the llee approach, as it can be broken with subsequent LD_PRELOADs, requires that I enter an alternative execution environment, and requires that I remember to run llee. I believe the methods above are less error-prone and integrate into the system more cleanly.

Unfortunately, the #!... convention is not supported on all operating
systems although it is very common on UNIX. I think we're going to end
up with a mixture of things:

Yes, I know. I figured other OS's have some other sort of mechanism to do the same thing.

1. The llee (llvm-run) approach needs to be maintained for those systems
    where all you can do is run a program (think OS/390, Windows, etc.)

Doesn't Windows have some method of associating extensions to executable programs?

2. We can do the #! trick now without modifying the bytecode file. We
    have a convention like this:
    #!/path/to/llvm-run -
    llvm......(bytecode)
    When llvm-run is given the - option, it reads the rest of the file
    as bytecode. This is how a shell works too.
3. We might want to eventually have an installer that registers the type
    with the kernel but I think that's a long way off. We should
    concentrate effort on items 1. and 2. above.

I don't think we need to do any encapsulation with ELF to accomplish the
same goals.

You are right. ELF encapsulation was more for the second set of goals (i.e. working with pre-existing system tools).

Reid.

------------------------------------------------------------------------

_______________________________________________
LLVM Developers mailing list
LLVMdev@cs.uiuc.edu http://llvm.cs.uiuc.edu
http://mail.cs.uiuc.edu/mailman/listinfo/llvmdev

-- John T.

>Unfortunately, the #!... convention is not supported on all operating
>systems although it is very common on UNIX. I think we're going to end
>up with a mixture of things:

Yes, I know. I figured other OS's have some other sort of mechanism to
do the same thing.

They do *something*; the point is that adding cruft into the bytecode
format will never handle *all* cases, and we'll still have
platform-specific additional hacks. So it's not worth polluting the
bytecode with these hacks.

>1. The llee (llvm-run) approach needs to be maintained for those systems
> where all you can do is run a program (think OS/390, Windows, etc.)

Doesn't Windows have some method of associating extensions to executable
programs?

Certainly, and you can do that TODAY, without modifying the bytecode at
all to include #! lines: just associate .bc with lli and you're done.

I don't see why we have to maintain 100% compatibility with GCC. We're
so incompatible in so many other ways that I don't see it as a
necessity.

We are? Currently you can just 'configure CC=llvmgcc' and stuff works.

For example, we probably won't have all the -f and -X and -W
options that GCC does. So, why can't we just DEFINE the optimization
levels and be done with it? It's not like users of LLVM can just use
their existing makefiles; they will have to make some adjustments.

They currently don't have to make these changes. The -f flags that don't
make sense we can just ignore. We support all of the -W flags I think.

Also, I don't know of very many people that use -O0. Typical usage is
either no -O option on the command line or -O2, -O3. In those typical
use cases the driver will give them basically what they expect.

I agree this is much more common.

So I propose:

-O0 = zero optimization, raw output from the front end
-O1 = default fast/lightweight optimization, emphasis on
        making compilation faster

That is fine, but my point is that NO USER will ever care about -O0.
Since this is the case, why expose it at all? It should only be exposed
for LLVM compiler hackers, which is why I suggested -On (later amended to
-Onone). I would not have a problem with it really being named something
without a -O prefix (-really-give-me-what-the-front-end-spits-out).

Given that, either -O0 and -O1 should do the same thing, or we should drop
one.

-Chris

> I don't buy this at all. In particular, these files are provided by
> front-end designers for the sole consumption of the driver. NO other
> tools should be looking in these files, they should use the compiler
> driver directly.

I don't believe this is realistic. This is a configuration file that
tells the driver how to compile stuff. There is a definite chance that
it will need to be modified as parts of the compiler are updated,
replaced, or removed.

Yes, exactly. But this is strictly for communication between the
front-end and the compiler driver, nothing more, nothing less. We want
the driver to be able to evolve without the front-ends having to be
upgraded. As such, we could just have a version number if needed.

Think of installing a new frontend. It would be nice if its
installation could automatically insert itself into the driver's
configuration file.

It would just add a fixed file to a directory, not modify any existing
files.

Or how about writing a program that prints the compiler's configuration
to stdout?

llvm-driver -print-compiler-configuration

Or an administrator who wants to write a quick program to re-configure
the compiler on several different machines he administrates?

You're missing the point. The driver is the interface to the program, not
these text files.

I think we have two choices for making these operations convenient. Either
we provide command line tools for modifying the configuration, or we make
the file's format such that these tools can be easily and accurately
written by others on demand.

No, if people are trying to do these things, it is a sign that we have
done something fatally wrong.

-Chris

How about we figure it out as it gets closer :)

-Chris

I forgot about some important options for dealing with makefiles:

the GCC -M and -M* options.

These assist in generating correct header file dependencies. These are
important for C/C++ but not for many other languages. However, it would
still be nice if we could have the compiler driver (eventually) emit
makefile dependencies based on the actual source read.
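What -M emits is just a make rule per object file listing every source and header actually read. A tiny sketch of that output format, with invented file names:

```python
# Sketch of GCC -M style output: one make rule per object file, naming
# the source file plus every header the compile actually read.
# File names are invented for illustration.
def make_depend(obj, source, headers):
    deps = " ".join([source] + headers)
    return f"{obj}: {deps}"

rule = make_depend("main.o", "main.c", ["util.h", "config.h"])
assert rule == "main.o: main.c util.h config.h"
```

A driver-level version would get the header list from the front end as it processes the translation unit, which is why Chris suggests it is really a front-end flag.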

Is this a future requirement for the driver?

Reid

This should just be a flag that is passed into the front-end.

-Chris