Cleaning up ARM parts of the Clang driver


I’m doing some cleaning up of the target-specific hacks in the Driver, as prep for some cross-compilation changes discussed previously on the list.

My previous set of patches tablegen’d up a lot of the horrible string matching, but I’ve realised that actually it’s almost all superfluous and can be deleted completely.

That said, there are some defaults and corner cases that I’d like the list’s (and especially Douglas, Chandler and Eric’s) opinions on before I go any further. This all relates to the ARM specific stuff – I can’t really touch the Intel, MIPS or PowerPC stuff so it’d be better if someone else cleans that up if they feel like it. A lot of these changes revolve around not trying to be too clever in the frontend driver and leaving defaults to more specialised parts of the back ends.

Does anyone have any opinions or answers to the following? Thanks!

Hi James,

I completely agree there’s a lot of cleanup that can be done with this stuff. Thanks for working on it!


  // Select the ABI to use.
  // FIXME: Support -meabi.
  const char *ABIName = 0;
  if (Arg *A = Args.getLastArg(options::OPT_mabi_EQ)) {
    ABIName = A->getValue(Args);
  } else {
    // Select the default based on the platform.
    switch (Triple.getEnvironment()) {
    case llvm::Triple::GNUEABI:
      ABIName = "aapcs-linux";
      break;
    case llvm::Triple::EABI:
      ABIName = "aapcs";
      break;
    default:
      ABIName = "apcs-gnu";
    }
  }

I don’t really think the target ABI needs to be set if it is part of the triple (the else clause) – all other tools should be able to infer it from the triple.

This sounds reasonable.
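As a minimal illustration of the inference being suggested (a hypothetical helper, not the actual driver code): any tool can derive the default ABI from the environment component of the triple alone, mirroring the switch quoted above.

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: derive the default ARM ABI purely from the
// environment part of the triple, so the driver never needs to spell
// it out for other tools. Mirrors the switch quoted above.
std::string abiForEnvironment(const std::string &Env) {
  if (Env == "gnueabi")
    return "aapcs-linux";
  if (Env == "eabi")
    return "aapcs";
  return "apcs-gnu"; // default when the triple says nothing more specific
}
```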

2: The driver currently tries to pick a default CPU for the architecture if none is specified. I think this is wrong – if no CPU is specified then the compiler should not tune for a specific CPU. A user specifying -arch armv7 would expect output with good blended performance on all armv7 cores, not output tuned specifically for, say, the Cortex-A8.

This would be a change, at least for Darwin, from what users have expected since pre-llvm days. It would break things very badly.

More philosophically, I believe there should always be a CPU, default or otherwise. If we want a “generic v7 core” target, then we should explicitly define one as a pseudo-CPU and allow folks to specifically select that. Anecdotally, I’ve done that in previous compilers I’ve worked on and it worked out well.
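The pseudo-CPU idea might look something like the following sketch (illustrative names and feature strings only, not the real ARM subtarget tables): selecting no CPU resolves to an explicit "generic" entry rather than to the absence of one.

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative CPU table with an explicit "generic" pseudo-CPU.
// Feature strings are made up for the example; the real tables live
// in the ARM backend.
static const std::map<std::string, std::string> CPUFeatures = {
    {"cortex-a8", "+v7,+neon,+vfp3"},
    {"cortex-a9", "+v7,+neon,+vfp3"},
    {"generic-v7", "+v7"}, // blended tuning, no core-specific scheduling
};

// An empty CPU name resolves to the explicit generic entry, so there
// is always *some* CPU, default or otherwise.
std::string resolveCPU(const std::string &CPU) {
  return CPU.empty() ? "generic-v7" : CPU;
}
```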

3: Selecting the floating point ABI is a rather large set of switch-cases which I think can be contracted into something as simple as:

  // If unspecified, choose the default based on the platform.
  if (FloatABI.empty()) {
    const llvm::Triple &Triple = getToolChain().getTriple();
    if (Triple.getOS() == llvm::Triple::Darwin &&
        (ArchName.startswith("v6") || ArchName.startswith("v7")))
      FloatABI = "softfp";
    else
      FloatABI = "soft";
  }

I also don’t see why the current default is “soft” if no ABI is specified. Why not “softfp”? If the target has hardware FP support that’ll be used, if not then it is implied that the ABI is “soft”. I also don’t know how critical that Darwin special-case is. If it could be gotten rid of, that’d be lovely.

I’m not 100% up on all the details of the floating point ABI naming, so take this with a grain of salt, but that Darwin default looks really strange, if for no other reason than the special-casing of v6 and v7. Evan and/or Bob would know better than I what the right logic is there.

4: These are hacks:

  // Use software floating point operations?
  if (FloatABI == "soft") {

  // Use software floating point argument passing?
  if (FloatABI != "hard") {

They’re referenced from lib/Basic/Targets.cpp – would it not be better for Targets.cpp to infer these from the other passed parameters? That’s actually a self-answering question, so my question really is: “Is there any reason why I can’t make the trivial change to make this so?”

No opinion.
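For reference, the trivial change being asked about could look something like this sketch (hedged: flag names are illustrative, and this is not Targets.cpp itself) – the target derives both feature bits from the single FloatABI value the driver already passes.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch: infer the two soft-float-related feature flags from the
// float ABI alone, rather than having the driver set them as separate
// hacks. Flag names are illustrative.
std::vector<std::string> featuresFromFloatABI(const std::string &FloatABI) {
  std::vector<std::string> Features;
  if (FloatABI == "soft")
    Features.push_back("+soft-float");     // software FP operations
  if (FloatABI != "hard")
    Features.push_back("+soft-float-abi"); // FP args in core registers
  return Features;
}
```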

  // Set the target features based on the FPU.
  if (FPU == "fpa" || FPU == "fpe2" || FPU == "fpe3" || FPU == "maverick") {

Where did these names come from? Do people actually use them? I see Google results for GCC command lines specific to a Cirrus core – is that the reason they’re there? Can we get rid of them as nasty GCC relics?

Pretty sure they’re gcc legacy, yeah. I don’t know if they’re being used or what the fallout would be to removing them, though.

  // Setting -msoft-float effectively disables NEON because of the GCC
  // implementation, although the same isn't true of VFP or VFP3.
  if (FloatABI == "soft") {

It’s not evident why this code is here. Why does -msoft-float disable NEON? NEON has an integer vectoriser too. GCC relic?

It’s because -msoft-float says “don’t use floating point registers”, not just “don’t use floating point.” Consider, for example, kernel or interrupt handler code that doesn’t save/restore the VFP/NEON registers. The naming of the command line option, however, is entirely legacy and I’ll grant it’s not the best.
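To make that concrete (an illustrative example, not from the thread): even a purely integer loop in an interrupt handler must not be NEON-vectorized, because NEON shares the VFP register file that the interrupted context has not saved.

```cpp
#include <cassert>

// Illustrative: an interrupt-handler-style routine. Built with
// -msoft-float, the compiler must lower this with core registers only;
// a NEON-vectorized version would clobber Q registers that the
// interrupted code still expects to hold its values.
volatile unsigned irq_scratch[64];

void irq_handler() {
  for (int i = 0; i < 64; ++i)
    irq_scratch[i] = 0; // must not become NEON stores under soft-float
}
```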

  // Setting -mno-global-merge disables the codegen global merge pass. Setting
  // -mglobal-merge has no effect as the pass is enabled by default.
  if (Arg *A = Args.getLastArg(options::OPT_mglobal_merge,
                               options::OPT_mno_global_merge)) {
    if (A->getOption().matches(options::OPT_mno_global_merge))

It seems like this should be better handled by the llvm::cl code in llc.

I don’t follow. This is to allow -mno-global-merge directly from the clang command line, right? If it’s in the llc bits, it would need to be “-mllvm -no-global-merge” or something like that instead, which isn’t intended to be a user-level mechanism.