gcc in c++

Hi,

Have you seen this: Ian Lance Taylor - gcc-in-cxx branch created
There is a new branch for converting gcc to C++.

Best regards,
--Edwin

The sad thing is that they seem to be replacing one unsafe language with
another, presumably with enormous effort.

The only hopeful sign in that thread is the proposal for using Cyclone
instead, starting here: Ivan Levashew - Re: gcc-in-cxx branch created

But it is summarily tossed out, without examination:
Ian Lance Taylor - Re: gcc-in-cxx branch created

If all we can ever use are languages that everyone knows, there will be
no progress.

-- hendrik

Another perspective on this sort of issue, if you haven't seen it:
http://weblogs.mozillazine.org/roc/archives/2007/10/tablua_fracta.html.

Part of what makes LLVM interesting is that there's the potential to
put together a new language on top of LLVM relatively easily, and get
powerful optimizers and cross-platform compatibility without having to
write any middle-end and back-end code. I think this really has the
potential to allow writing new languages and higher-quality
implementations for existing languages that can compete with C/C++ in
places where C/C++ traditionally dominate.

-Eli

If we apply their logic to LLVM, we can dominate if we just target 8 cores. :)

Hi,

Have you seen this: Ian Lance Taylor - gcc-in-cxx branch created
There is a new branch for converting gcc to C++.

Best regards,
--Edwin

The sad thing is that they seem to be replacing one unsafe language with
another, presumably with enormous effort.

The only hopeful sign in that thread is the proposal for using Cyclone
instead, starting here: Ivan Levashew - Re: gcc-in-cxx branch created

But it is summarily tossed out, without examination:
Ian Lance Taylor - Re: gcc-in-cxx branch created

As it should be.

If all we can ever use are languages that everyone knows, there will be
no progress.

Who is "we"? I haven't seen any patches from you to GCC; I grepped all
the ChangeLogs.

In any case, it is only by your definition of progress that none will be made.
It would certainly be progress by the GCC project's definition of progress.

You are more than free to pursue your goal of getting GCC written in
a new language by porting GCC yourself, and showing it has serious
advantages.
See how much support you get.
You certainly won't get support simply by doing what Ivan did, which
is to never contribute real work yet suggest we all do something wildly
different just because he thinks it is better.

The reality is GCC is moving to C++ because its community of active
developers wants to move to C++. You can argue this is a bad idea.
The consensus is that it isn't a bad idea. The consensus is also that
our time is better spent on coding our compiler than on evaluating and
learning new languages. If someone came along and showed us we could
do all we ever wanted with some whizz-bang new language, and it would
require minimal effort on our part, I'm sure we'd use it.
Nobody has done this.

You certainly will never achieve the goal of getting people to use new
languages by sitting on the sidelines and saying how sad it is.

--Dan

Arggh, my bad.
I thought he had posted this stuff to the gcc mailing list; sorry for
the off-topic intrusion.

That's not so far from the truth... a high-quality parallelized
LTO+codegen implementation would really set LLVM apart from other
current compilers.

-Eli

Does this mean LLVM cannot dominate if it targets 1-core machines too?

Making an optimizer/code generator parallel is fundamentally a lot
easier than making a browser parallel because the problems parallelize
a lot more naturally. There are essentially two chunks of code in the
llvm pipeline that take up large amounts of time: the optimization
passes and the code generator. I think parallelizing both of these is
feasible with conventional parallelism techniques, with very little
penalty for the single-core case.

That said, that wasn't the point I was trying to make by linking to
that blog post; I was really trying to point in the direction of
building new programming tools with LLVM. Parallelizing a web-browser
to many cores involves much more difficult issues.

-Eli

If we apply their logic to LLVM, we can dominate if we just target 8
cores. :)

Does this mean LLVM cannot dominate if it targets 1-core machines
too?

Making an optimizer/code generator parallel is fundamentally a lot
easier than making a browser parallel because the problems parallelize
a lot more naturally.

It is hard for me to agree or disagree because I do not know typical web browser architecture.

There are essentially two chunks of code in the
llvm pipeline that take up large amounts of time: the optimization
passes and the code generator. I think parallelizing both of these is
feasible with conventional parallelism techniques, with very little
penalty for the single-core case.

That said, that wasn't the point I was trying to make by linking to
that blog post; I was really trying to point in the direction of
building new programming tools with LLVM. Parallelizing a web-browser
to many cores involves much more difficult issues.

host vs. target

If LLVM itself is parallelized, then it can spit out optimized code faster when the host has multiple cores. This is a win for developers who use an LLVM-based compiler, e.g. web browser developers.

If LLVM targets machines with multiple cores, then the generated code runs faster on machines with 8 cores. This is the big one for end users whose applications are compiled using LLVM-based tools, e.g. web browser users. Mike is well versed in compiler host/target terminology, so I asked him the question.

This is also something I'm interested in supporting. The whole design of the FunctionPassManager and CallGraphSCCPassManager (e.g. inliner) is to allow parallelism of optimizations between different parts of the program.

There are three main problems with this:

1) There are various places in LLVM that use globals, e.g. to unique types
    and for a couple things in the code generator. This should be easy to
    synchronize on or eliminate.

2) Various passes poke at the module to get things like intrinsics,
    function declarations, etc. I think this is easy to solve with
    locking.

3) The use/def chain machinery in LLVM violates the principle that local
    manipulation doesn't touch global objects. This is because
    *everything* has use/def chains, including global variables and even
    constants (e.g. 'i32 2'). Solving this is much trickier than #2, but
    seems feasible with some careful atomic-access algorithms.

-Chris

If we apply their logic to LLVM, we can dominate, if we just
target 8
cores. :)

Does this mean LLVM cannot dominate if it targets 1-core machines
too?

Making an optimizer/code generator parallel is fundamentally a lot
easier than making a browser parallel because the problems parallelize
a lot more naturally.

It is hard for me to agree or disagree because I do not know typical
web browser architecture.

Okay... in case you're interested, I can give a quick description of
some of the issues. First is the pipeline of actually rendering a
page, parsing->DOM construction->style resolution->layout->painting,
which is essentially serial; within the parsing, DOM construction, and
layout stages, it's very difficult to parallelize because later
pieces of the document depend on earlier pieces. (This is fast enough
single-threaded not to be a serious issue on a desktop computer these
days, but it's still an issue on lower-power devices, and for heavy
DOM manipulation.) Another is that JavaScript is fundamentally
single-threaded, so it's extremely difficult to parallelize. And
multiple pages can interact with each other's state, which complicates
things even further.

There are essentially two chunks of code in the
llvm pipeline that take up large amounts of time: the optimization
passes and the code generator. I think parallelizing both of these is
feasible with conventional parallelism techniques, with very little
penalty for the single-core case.

That said, that wasn't the point I was trying to make by linking to
that blog post; I was really trying to point in the direction of
building new programming tools with LLVM. Parallelizing a web-browser
to many cores involves much more difficult issues.

host vs. target

If LLVM itself is parallelized, then it can spit out optimized code
faster when the host has multiple cores. This is a win for developers
who use an LLVM-based compiler, e.g. web browser developers.

If LLVM targets machines with multiple cores, then the generated code
runs faster on machines with 8 cores. This is the big one for end users
whose applications are compiled using LLVM-based tools, e.g. web
browser users. Mike is well versed in compiler host/target terminology,
so I asked him the question.

Oh, okay... sorry to drag your question off-topic.

-Eli