line. The same “Constant:i32<0>” node I see in my toy backend, which forces me to add a pattern that lowers it using “xor reg,reg”, much like the “or g0,g0” pattern in SPARC.
However, I don’t see that Constant node when compiling with the X86 backend. How does it achieve this? And why are the initial DAGs different at all? I was under the impression that the initial DAG is fully target-independent, so these DAGs should be identical before ISel starts. Am I wrong?
The selection DAG is very much target-specific. The differences in the initial DAG usually come from lowering function arguments and return values, and from lowering calls to other functions. This is where different calling conventions are applied, so the initial DAG may be different even for the same target if you change the calling convention.
Later on, more differences appear from legalization (which each target needs to customize to match its needs) and from custom DAG combines. All of this happens before the actual selection process starts.
To answer this: it seems to be part of the return sequence, i.e. the part of the calling convention that dictates how a function passes return values back to its caller. This is handled via LowerReturn in the target lowering object. Check SparcTargetLowering::LowerReturn for the Sparc implementation.
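To give a feel for where those nodes come from, here is a minimal sketch of a LowerReturn hook, modeled on the shape the real targets use. MyTargetLowering, RetCC_MyTarget, and MYTARGETISD::RET_GLUE are hypothetical names standing in for a target's own classes and the calling-convention function generated from its .td files; this is an illustration, not the actual Sparc or X86 code.

```cpp
// Sketch: how a target lowers "ret" into DAG nodes. Every SDValue
// created here (CopyToReg, Register, the return node itself) shows up
// in the *initial* DAG, which is why the initial DAG is target-specific.
SDValue
MyTargetLowering::LowerReturn(SDValue Chain, CallingConv::ID CallConv,
                              bool IsVarArg,
                              const SmallVectorImpl<ISD::OutputArg> &Outs,
                              const SmallVectorImpl<SDValue> &OutVals,
                              const SDLoc &DL, SelectionDAG &DAG) const {
  SmallVector<CCValAssign, 16> RVLocs;
  CCState CCInfo(CallConv, IsVarArg, DAG.getMachineFunction(), RVLocs,
                 *DAG.getContext());
  // Ask the (tablegen-generated) calling convention which physical
  // registers hold the return values.
  CCInfo.AnalyzeReturn(Outs, RetCC_MyTarget);

  SDValue Glue;
  SmallVector<SDValue, 4> RetOps(1, Chain);
  for (unsigned i = 0, e = RVLocs.size(); i != e; ++i) {
    CCValAssign &VA = RVLocs[i];
    // Copy each return value into its designated physical register.
    Chain = DAG.getCopyToReg(Chain, DL, VA.getLocReg(), OutVals[i], Glue);
    Glue = Chain.getValue(1);
    RetOps.push_back(DAG.getRegister(VA.getLocReg(), VA.getLocVT()));
  }
  RetOps[0] = Chain;
  if (Glue.getNode())
    RetOps.push_back(Glue);
  // Target-specific return node; the exact opcode is up to the target.
  return DAG.getNode(MYTARGETISD::RET_GLUE, DL, MVT::Other, RetOps);
}
```

If a backend copied from Sparc routes the outgoing value through a stack slot or materializes it as an explicit constant here, that is exactly where the FrameIndex and Constant nodes in the initial DAG originate.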
Thanks Tim and Krzysztof for pointing me in the right direction.
Indeed, when I started my backend I just blindly copied LowerFormalArguments and LowerReturn from the SPARC backend, and that’s where these FrameIndex and Constant nodes are coming from.
However, I haven’t managed to get a “Constant<>” in the DAG when compiling for X86. I’m interested in how it is lowered. Can you please give me some guidance on this?
How are you looking? When I run "llc -mtriple=x86_64-linux-gnu -debug-only=isel" on your IR I get multiple instances of Constants. At the very start is:
Changing optimization level for Function main
Before: -O2 ; After: -O0
FastISel is enabled
=== main
Enabling fast-isel
Total amount of phi nodes to update: 0
*** MachineFunction at end of ISel ***
Machine code for function main: IsSSA, TracksLiveness
Frame Objects:
fi#0: size=4, align=4, at location [SP+8]
fi#1: size=4, align=4, at location [SP+8]
fi#2: size=8, align=8, at location [SP+8]
Function Live Ins: %edi in %0, %rsi in %2
Ah, I think I can guess what's happening. I assume your 1.ll is Clang's output, and you used the default optimization level (which is -O0).
That means your function is actually tagged as "optnone", and LLVM tries to use a different instruction selector called "FastISel" rather than create a DAG at all. This speeds up compilation and improves the debug experience, but not all targets support it. SPARC falls back to the DAG because FastISel can't handle the function, but x86 gets through it without ever creating a DAG.
To see the X86 DAG you can either remove the "optnone" attribute from the .ll file or override the selector on the llc command line with -fast-isel=0.
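Concretely, on Clang -O0 output shaped like the following (the function body and attribute group number are just illustrative), it is the "optnone" in the attribute group that steers llc toward FastISel:

```llvm
; 1.ll -- deleting "optnone" below makes llc build the DAG for main
define i32 @main() #0 {
entry:
  ret i32 0
}
attributes #0 = { noinline nounwind optnone }
```

Alternatively, leave the attribute in place and run "llc -mtriple=x86_64-linux-gnu -debug-only=isel -fast-isel=0 1.ll" (note that -debug-only requires an assertions-enabled build of llc) to force the SelectionDAG path and see where the Constant nodes are lowered.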