Legalized selection DAG differs for the same code and flags

Hello, LLVM Devs.

I’m compiling the following code with my own backend:

int foo() {
  char arr[4];
  arr[0] = 0xAA;
  arr[1] = 0xBB;
  arr[2] = 0xCC;
  arr[3] = 0xDD;
  return (int)&arr[0];
}

The memory operation in the “return” statement ends up transformed into a 4-byte load in the initial DAG:

load<(dereferenceable load 4 from %ir.7, align 1, addrspace 1)> t31, FrameIndex:i32<0>, …

However, at the “Legalized selection DAG” stage things go differently depending on the OS I’m running on. On Windows the load stays in its previous form, but on FreeBSD it gets split into 1-byte loads for some reason:

load<(dereferenceable load 1 from %ir.7, addrspace 1)>

load<(dereferenceable load 1 from %ir.7 + 1, addrspace 1)>

Can anyone give me a hint as to why this happens? The optimization level is the same, and FastISel is used in both cases.

Thanks in advance.

In target-independent legalization, whether an unaligned load is considered “legal” is controlled by the target’s implementation of “allowsMisalignedMemoryAccesses()”. If it returns false, the load is not legal, and is therefore split into smaller, legal loads.
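For reference, a minimal sketch of such an override (the backend name “MyTarget” is hypothetical, and the exact signature has changed between LLVM releases, so check TargetLowering.h in your tree):

// In MyTargetISelLowering.cpp; the matching declaration in the header
// needs the same signature plus “const override”.
bool MyTargetTargetLowering::allowsMisalignedMemoryAccesses(
    EVT VT, unsigned AddrSpace, Align Alignment,
    MachineMemOperand::Flags Flags, unsigned *Fast) const {
  // Claim that misaligned i16/i32 loads and stores are legal, so the
  // legalizer keeps them intact instead of splitting them into
  // byte-sized operations.
  if (VT == MVT::i16 || VT == MVT::i32) {
    if (Fast)
      *Fast = 1; // report them as fast, too
    return true;
  }
  return false;
}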

Hopefully that’s enough to point you in the right direction.

-Eli

Thank you for the pointer. I don’t know why there is a difference between FreeBSD and Windows, but overriding this function in my backend makes the behavior identical in both cases.
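In case it helps anyone who runs into the same thing: with an assertions-enabled build, llc -debug-only=isel prints the DAG at each stage (including “Legalized selection DAG”), which makes it easy to diff the output from the two hosts and see exactly where they diverge.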