# Simple question on sign

How do you determine if a shift is signed or not?

ashr = always signed?
lshr = always unsigned?
shl = always signed?

CmpInst has an "isSigned()" function, but none of the other Instruction classes I've looked at seem to have one.

How do you determine if a shift is signed or not?

ashr = always signed?

Essentially, yes.

lshr = always unsigned?

Essentially, yes.

shl = always signed?

Signed left shift and unsigned left shift are both shl.

http://llvm.org/docs/LangRef.html#i_shl describes the semantics of shifts.

CmpInst has an "isSigned()" function, but none of the other Instruction
classes I've looked at seem to have one.

There isn't an isSigned() function because the query doesn't really
make sense. LLVM IR doesn't in general track whether a value is
signed or unsigned.

-Eli

How does LLVM decide when to use unsigned instructions then, such as unsigned
adds and loads? I'm trying to describe some multiply-shift ops and am getting
a bit stuck differentiating between signed and unsigned.

sam

Eli Friedman-2 wrote:

Hi Sam,

How does LLVM decide when to use unsigned instructions then, such as unsigned
adds and loads? I'm trying to describe some multiply-shift ops and am getting
a bit stuck differentiating between signed and unsigned.

there is no difference between signed and unsigned addition when viewed as a
bunch of bits. Suppose you take a collection of 32 bits; call this A. In C,
you can view A as a signed or unsigned 32 bit integer; call these Asigned and
Aunsigned. Take another collection B of 32 bits, and Bsigned/Bunsigned as the
signed and unsigned 32 bit C integers. Do the two additions: Asigned+Bsigned
and Aunsigned+Bunsigned. If you look at the bits making up the results you will
discover that they are exactly the same in the signed and unsigned cases. This
is why LLVM doesn't distinguish between signed and unsigned addition.

For loads: when you load a 32 bit quantity from memory you just get those same
32 bits in a register. There is no notion of signed/unsigned here, you are just
moving bunches of bits around.

Ciao, Duncan.

Hi Sam,

Whereas most languages track signedness on the variable/value level, LLVM IR
takes a more machine-like approach of having the sign apply to the
instruction rather than the value.

It is therefore the frontend (or whatever is initially producing the LLVM
IR) that should know whether an operation should be signed or unsigned.

Hopefully that makes sense,

Cheers,

James

Thanks for the replies guys, but I think I should have phrased my question
better. Looking at the Mips backend, there are machine instructions that
operate on signed and unsigned data, such as add and addu. Like Mips, I
need to specify unsigned-specific instructions, so how are these chosen
between if the LLVM IR does not carry sign information? A general pointer
in the right direction is all I need. Sorry if I'm being dense.

sam

James Molloy-3 wrote:

Hi Sam,

I am not a MIPS expert by any means, so YMMV, but: MIPS addu only differs
from "add" in its (non)setting of the overflow flag. Because LLVM doesn't
provide a way via the IR to access the overflow flag, a special notation
isn't required in the IR to distinguish the two operations.

Do you have another example?

Cheers,

James

Hi James,

So does this mean that if the instruction could set the overflow flag, the
instruction should not have [(set ... )] in its pattern? I see this is the
difference in the instruction descriptions for the Mips case.

I'm wondering how LLVM knows when to use certain compare instructions such
as SETNE or SETUNE, and when to use sign- or zero-extending loads. I can see
the PatFrags described and the LoadExtType enum defined, and the use of zext
and sext in the IR to differentiate what the values are being loaded into.

Basically I'm trying to describe patterns for automatically selecting
between various multiplication instructions:

#define MULL(t,s1,s2) t = (s1) * INT16(s2)
#define MULLU(t,s1,s2) t = (s1) * UINT16(s2)
#define MULH(t,s1,s2) t = (s1) * INT16((s2) >> 16)
#define MULHU(t,s1,s2) t = (s1) * UINT16((s2) >> 16)
#define MULHS(t,s1,s2) t = ((s1) * UINT16((s2) >> 16)) << 16
#define MULLL(t,s1,s2) t = INT16(s1) * INT16(s2)
#define MULLLU(t,s1,s2) t = UINT16(s1) * UINT16(s2)
#define MULLH(t,s1,s2) t = INT16(s1) * INT16((s2) >> 16)
#define MULLHU(t,s1,s2) t = UINT16(s1) * UINT16((s2) >> 16)
#define MULHH(t,s1,s2) t = INT16((s1) >> 16) * INT16((s2) >> 16)
#define MULHHU(t,s1,s2) t = UINT16((s1) >> 16) * UINT16((s2) >> 16)

I'm guessing, from what I've seen, that I may just need to check in my Pats
whether a zext or sext has been applied to the value being operated on..?

Thanks,
Sam

James Molloy-3 wrote:

The Mips backend in its current form never emits instructions that
derive from ArithOverflowR or ArithOverflowI (instructions that do not
end with "u", such as ADD or SUB). These instructions were probably
added just for completeness and removing them will not do any harm.