alternate clang driver

I don't buy the C++-is-faster-than-Python argument. It's just a driver
for a compiler! You could write it in Turing machine primitives and it
would still be super fast on a modern computer. It's not computing the
strongly connected components of a terabyte-sized graph.

The issue is start-up time. It takes longer to launch the Python process than it takes to run the entire compilation and code generation on small C files at -O0.
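(A rough way to check this on your own machine is below; the file name and the assumption that python and clang are on your PATH are mine, and the absolute numbers will obviously vary.)

    #!/usr/bin/env python
    # Rough timing sketch: bare interpreter startup vs. a tiny clang -O0
    # compile.  Assumes `python` and `clang` are on PATH; tiny.c is a
    # throwaway file.
    import subprocess
    import time

    def wall_time(cmd):
        # Wall-clock time for one subprocess invocation.
        start = time.time()
        subprocess.check_call(cmd)
        return time.time() - start

    with open("tiny.c", "w") as f:
        f.write("int main(void) { return 0; }\n")

    startup = wall_time(["python", "-c", "pass"])       # bare interpreter launch
    compile_ = wall_time(["clang", "-O0", "-c", "tiny.c"])  # full compile + codegen
    print("python startup: %.3fs, clang -O0 compile: %.3fs"
          % (startup, compile_))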

I completely agree. The startup time for Python was a huge problem for the *first* clang driver, which happened to be written in Python.

FWIW, Reed contacted me about this work back in May. I encouraged him to contribute to improving the main Clang driver, and he wasn't interested. It's perfectly fine for him to go off and do something different, but this work clearly isn't interesting to mainline clang development. I don't see why we're still discussing it :)

-Chris

I didn't say that I'm not interested in fixing this in mainline clang.

I needed something right away and it was not practical to try and sort out all of this in mainline clang in a timely fashion.

I fully understand that one sometimes has to do the "wrong" long-term thing to make a short-term deadline, and understand that you needed the driver to do something that the clang driver didn't already do.

However, for it to go into mainline clang, it has to be better at what the clang driver *already* does, or it is a regression. As compile times are very important for the project, any serious proposal to replace the existing driver with something written in Python is a non-starter.

At this point I think it's better for me to continue my prototype in Python.

Sure, go for it. I just wanted to clear up apparent confusion where some people thought you were proposing it as a replacement for the existing driver.

-Chris

22.10.2011, 01:47, "David Chisnall":

Adding a dependency on Python (or Lua, or whatever buzzword scripting language you favour this week) for invoking an [Objective-]C[++] compiler seems to redefine overkill.

In defense of Lua: well-written Lua code is almost as fast as C even without a JIT, and the original interpreter is an executable under 200K. So it's no more 'overkill' than the shell scripts used by autotools, which are definitely slower.

Are you comparing autotools to clang here?
This discussion is starting to make me feel sick. Clang needs a driver to find headers and platform/OS/runtime libraries and to drive a linker. No more, no less, and that certainly does not warrant additional dependencies in my eyes. I think a BSD-licensed clone/improvement of autotools might have a place as an LLVM subproject if it were good and useful enough (which would be extremely hard to do), but not in Clang, period.

Ruben

Of course not. I was referring to shell wrappers like libtool, usually used in conjunction with autotools.

Konstantin Tokarev <annulen@yandex.ru> writes:

22.10.2011, 01:47, "David Chisnall":

Adding a dependency on Python (or Lua, or whatever buzzword scripting
language you favour this week) for invoking an [Objective-]C[++]
compiler seems to redefine overkill.

In defense of Lua: well-written Lua code is almost as fast as C even
without a JIT, and the original interpreter is an executable under
200K.

Agreed; Lua is quite suitable for this kind of task, much more so than
many other scripting languages. It's very fast, very small, and very
portable.

[Indeed, the Lua source is small enough (and liberally licensed
enough) that it would be perfectly reasonable to have a copy of Lua in
the clang source tree to avoid an additional dependency...]

-Miles

autotools is great when it works, but it only works if a POSIX shell
is available, and even then it often fails quite spectacularly if you
try to use it on a platform that the original maintainer did not
anticipate.

autotools has my vote for the very, very worst Open Source software
there is. Whenever I encounter a README or INSTALL file that says to
just do "./configure ; make ; sudo make install" I feel sick.

ZooLib has never used a command-line driven configuration. Everything
it needs is taken care of in a C++ header file. It has never had any
problem at all building on platforms like the Classic Mac OS that had
no shell of any sort. I've built ZooLib on lots of different
platforms and never had any trouble with it, while I've had no end of
trouble with autotools.

   http://www.zoolib.org/

It is Open Source under the MIT License.

2011/10/26 Don Quixote de la Mancha <quixote@dulcineatech.com>

I think a BSD-licensed clone/improvement of autotools might have a
place as an LLVM subproject if it were good and useful enough (which
would be extremely hard to do), but not in Clang, period.

autotools is great when it works, but it only works if a POSIX shell
is available, and even then it often fails quite spectacularly if you
try to use it on a platform that the original maintainer did not
anticipate.

Agreed. The hypothetical clone I mentioned would throw that shell dependency, and the need to anticipate every platform, overboard.

autotools has my vote for the very, very worst Open Source software
there is. Whenever I encounter a README or INSTALL file that says to
just do “./configure ; make ; sudo make install” I feel sick.

Agreed. It’s hell to modify and fix the scripts when something doesn’t work quite right.

ZooLib has never used a command-line driven configuration. Everything
it needs is taken care of in a C++ header file. It has never had any
problem at all building on platforms like the Classic Mac OS that had
no shell of any sort. I’ve built ZooLib on lots of different
platforms and never had any trouble with it, while I’ve had no end of
trouble with autotools.

http://www.zoolib.org/

It is Open Source under the MIT License.

Cool, that’s how I think as well, and that’s pretty much how Boost works (apart from its build tool itself). WebKit does the same. Most configury checks are superfluous anyway on the most common platforms people write their code for. This is getting off-topic though, so I’m going to shut up now :)

Ruben

autotools does two main things: it creates a header file called
"config.h" that has #defines for lots of system-dependent stuff, and
it generates Makefiles.

Over the years I have observed that most of what is declared in
config.h is not actually used by the codebase that depends on it.
This is because the maintainer did not customize the configure script
to test only for what is actually needed.
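A quick way to test that observation on a given project is to compare
the macros config.h defines against the identifiers the sources
actually mention. Here is an illustrative sketch (the src/ path is my
assumption, and it will miss macros used indirectly through other
headers):

    #!/usr/bin/env python
    # Sketch: list config.h macros that the sources never reference.
    # Assumes a generated config.h in the current directory and
    # sources under src/.
    import os
    import re

    # Collect every macro name #defined in config.h.
    defined = set()
    for line in open("config.h"):
        m = re.match(r"\s*#\s*define\s+([A-Za-z_][A-Za-z0-9_]*)", line)
        if m:
            defined.add(m.group(1))

    # Collect every identifier that appears anywhere in the sources.
    used = set()
    for root, dirs, files in os.walk("src"):
        for name in files:
            if name.endswith((".c", ".h", ".cc", ".cpp")):
                text = open(os.path.join(root, name)).read()
                used.update(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", text))

    # Whatever was defined but never mentioned is dead configuration.
    for macro in sorted(defined - used):
        print(macro)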

The Makefiles it generates look like line noise. If there is some
problem with the build, it is difficult to impossible to figure out by
reading autotools-generated Makefiles.

There are only so many automated build systems in common use: the
various IDE project formats, Visual Studio's nmake, GNU make, the
original UNIX make and so on.

It should not be that hard to hand-create Makefiles for the
most-popular build tools that are a lot easier to read and debug than
the ones that autotools generates.

As for config.h, a possible solution would be to have everyone submit
the config.h that their particular build generates, as well as the
output of something like:

   touch foo.h ; cpp -dM foo.h

With GCC that prints out all the pre-defined preprocessor symbols.
The way you get them all, though, is quite different for different
compilers.

Then someone could look at all the different submitted config.h's and
the different sets of compiler-dependent preprocessor symbols, and
hand-roll a single header file that would work for everyone.
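As a sketch of the collection step, something like this could dump
each compiler's predefined-macro set to a file so the sets can be
diffed by hand (the compiler list is my assumption; -E -dM on an
empty file works for GCC and clang, but as noted above, other
compilers need their own incantations):

    #!/usr/bin/env python
    # Sketch: dump each compiler's predefined macros for comparison.
    import subprocess

    open("empty.h", "w").close()  # equivalent of `touch empty.h`

    for cc in ["gcc", "clang"]:
        try:
            out = subprocess.check_output([cc, "-E", "-dM", "empty.h"])
        except OSError:
            continue  # compiler not installed on this machine
        with open("predefined-%s.txt" % cc, "wb") as f:
            f.write(out)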

If someone wanted to port to a new target or build host, then they
would have to adjust the config.h and Makefiles appropriately.

ZooLib actually has two configuration headers that come with the
framework. All of its source code and the user's client code includes
a file called "zconfig.h"; usually zconfig.h includes a "previous"
header, then some of the user's own definitions, and finally an
"afterwards" header whose result depends on everything included
before it.

[I'm a bit wary of wading into this, as I know many people love to
rant about autotools. I think they get a lot of undeserved abuse:
yes, the implementation is gross in many ways, but as _tools_, they
actually work quite well from a user's point of view, for many uses
(OK, not if you want to compile with Visual Studio).]

Don Quixote de la Mancha <quixote@dulcineatech.com> writes:

It should not be that hard to hand-create Makefiles for the
most-popular build tools that are a lot easier to read and debug than
the ones that autotools generates.

You're comparing a _source_ file ("hand-written XXX") with a generated
file (autotools output). They are not the same, and should not be
compared directly.

_Trivial_ makefiles are very easy to write. Makefiles that do more
actually tend to be quite difficult to write, typically contain huge
masses of boilerplate, are hard to maintain and prone to bit-rot.

An automake input file, by contrast, is simple and easy to maintain,
but supports many features (automatic dependency generation, for
example).

As for debugging (where a direct comparison is more relevant), I think
you're just wrong. Makefiles output by autotools are actually not
particularly hard to debug -- they're _long_, but pretty
straightforward and easy to read, and the boilerplate is typically
well-debugged upstream. Debugging them is really no harder than
debugging a typical hand-written Makefile with the same amount of
functionality (e.g., the Makefiles in Linux, git, etc.) -- and often
simpler, because hand-written Makefiles tend to use a lot more special
functionality to try to keep the verbosity down.

If someone wanted to port to a new target or build host, then they
would have to adjust the config.h and Makefiles appropriately.

Yeah, that's how it was done "back in the day." It was a huge pain.
There's a very good reason that autoconf adopted the approach it uses
(functionality testing instead of the "per-target configs" that older
approaches usually used).

-Miles