Hello,
I understand a shared-library build is currently a lot slower than a
statically linked one. However, for debug builds, where speed is not required,
a shared build has a lot of advantages.
A statically linked clang/llvm build on Linux is 17 GB; the shared build is only 1.5 GB.
Assuming you're talking about a debug build, you can probably get that down
a lot with at least -fdebug-types-section or with -gsplit-dwarf.
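(Roughly, that would look something like the following - treat it as a sketch;
LLVM_USE_SPLIT_DWARF may not exist in older trees, and the build/source
directory layout is just a placeholder:

  # split DWARF via the dedicated CMake option:
  cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug \
        -DLLVM_USE_SPLIT_DWARF=ON \
        ../llvm

  # or pass the compiler flag directly (Debug already adds -g):
  cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug \
        -DCMAKE_CXX_FLAGS="-fdebug-types-section" \
        ../llvm
)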
On a mainstream
Linux host machine with 12 GB of memory, only -j1 can be used to finish the
final linking step.
If you use Ninja, I believe it has a separate pool limit you can tweak for
linking (but, again, with -fdebug-types-section or -gsplit-dwarf you might
make the link steps more palatable; -gsplit-dwarf is probably the
better/more important choice, though).
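(Concretely, something along these lines - a sketch, and as far as I know
LLVM_PARALLEL_LINK_JOBS only takes effect with the Ninja generator:

  # cap concurrent link jobs via a Ninja job pool, keep compiles parallel:
  cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug \
        -DLLVM_PARALLEL_LINK_JOBS=1 \
        ../llvm
  ninja -j8
)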
But I'd be surprised if the limit is that low - are you using gold? Or
binutils ld? Gold is a good/better choice here, and usually I only end up
using about a GB of RAM per link process, I think.
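(If you're not already using gold, switching is roughly this - a sketch;
LLVM_USE_LINKER may not exist in older trees, in which case adding
-fuse-ld=gold to the linker flags does the same thing:

  # build LLVM itself with the gold linker:
  cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug \
        -DLLVM_USE_LINKER=gold \
        ../llvm
)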
With -j8, Linux will kill the link jobs. With a 17 GB footprint, copying from
the build folder to the install folder is also very slow.
Solid state, or spinning disks? Solid state would help a bit.
So it is very reasonable to set
BUILD_SHARED_LIBS=ON as the default for debug builds, because it would improve
usability and performance.
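(For reference, that's just the usual out-of-source configuration with the
flag added:

  # shared-library debug configuration:
  cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug \
        -DBUILD_SHARED_LIBS=ON \
        ../llvm
)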
The thing is, the performance tradeoff also hits test execution time. At
least last time I checked (granted, on a 32 core, 64 GB of RAM machine), the
build-time improvement (on a clean build) with shared libs was wiped out by a
single execution of the regression test suite, which ran enough slower to eat
the savings. Let alone on an incremental build... if I remember my numbers
correctly.
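(If anyone wants to re-measure that tradeoff, the rough recipe would be
something like the following; the two build directory names are placeholders
for a static and a shared configuration:

  # time a clean build plus one regression run in each configuration:
  cd build-static && time ninja && time ninja check-llvm
  cd ../build-shared && time ninja && time ninja check-llvm
)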
Not sure whether it's the right call to change the default - people can
change their own settings easily enough.
- Dave