What should a truncating store do?

OK, I’m clear on scalars. Data races are thankfully OK in this context.

Densely packing vectors sounds efficient and is clear in the case where lanes * width is a multiple of 8 bits. I don’t think I understand how it works in other cases.

Could we take a store of <4 x i8> truncating to <4 x i7> as an example? This can be converted into four scalar i8 → i7 stores with corresponding increments to the address, in which case the final layout in memory is 0b01111111011111110111111101111111. Or it can be written as a packed vector, which I think would resemble 0b00001111111111111111111111111111.
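To make the two layouts concrete, here is a small Python sketch (my own illustration, not LLVM code) that builds both bit patterns for four all-ones i7 lanes:

```python
def scalar_layout(values, elem_bits, byte_bits=8):
    """Each element gets its own byte-aligned slot (zero-extended)."""
    slot = (elem_bits + byte_bits - 1) // byte_bits * byte_bits
    out = 0
    for i, v in enumerate(values):
        out |= (v & ((1 << elem_bits) - 1)) << (i * slot)
    return out

def packed_layout(values, elem_bits):
    """Elements are bit-packed with no padding between lanes."""
    out = 0
    for i, v in enumerate(values):
        out |= (v & ((1 << elem_bits) - 1)) << (i * elem_bits)
    return out

lanes = [0x7F] * 4  # four i7 lanes, all ones
print(bin(scalar_layout(lanes, 7)))  # 0b1111111011111110111111101111111
print(bin(packed_layout(lanes, 7)))  # 0b1111111111111111111111111111
```

The scalar version reproduces the 0x7F7F7F7F pattern above; the packed version is 28 contiguous one bits.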

This would mean the memory layout changes depending on how/whether the legaliser breaks large vectors down into smaller types. Is this the case? For example, a <4 x i32> => <4 x i31> store converts to two <2 x i32> => <2 x i31> stores on a target with <2 x i32> legal, but would not be split if <4 x i32> were declared legal.
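If that is the case, the lane placement can be sketched like this (a hypothetical model of the arithmetic, assuming each split store is itself bit-packed but starts on a byte boundary):

```python
def lane_bit_offsets_packed(num, elem_bits):
    # Whole-vector packing: lane i starts at bit i * elem_bits.
    return [i * elem_bits for i in range(num)]

def lane_bit_offsets_split(num, elem_bits, split, byte_bits=8):
    # Each half is bit-packed, but every half-store begins on a byte boundary.
    half = num // split
    half_bytes = (half * elem_bits + byte_bits - 1) // byte_bits
    offs = []
    for part in range(split):
        base = part * half_bytes * byte_bits
        offs += [base + i * elem_bits for i in range(half)]
    return offs

print(lane_bit_offsets_packed(4, 31))    # [0, 31, 62, 93]
print(lane_bit_offsets_split(4, 31, 2))  # [0, 31, 64, 95]
```

Lanes 2 and 3 land at bit offsets 64/95 instead of 62/93, so the in-memory layout really would depend on how the legaliser splits the vector.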

Vectors get complicated; I don't recall all the details of what the code generator currently does/is supposed to do. See also https://bugs.llvm.org/show_bug.cgi?id=31265 .

-Eli

They are starting to look complicated. The patch linked is interesting; perhaps v1 vectors are special-cased. It shouldn’t be too onerous to work out what one or two in-tree back ends do by experimentation.

Thanks again, it’s great to have context beyond the source.

(Not sure if this exactly maps to “truncating store”, but I think it at least touches some of the subjects discussed in this thread)

Our out-of-tree target needs several patches to get things working correctly for us.

We have introduced i24 and i40 types in ValueTypes/MachineValueTypes (in addition to the normal power-of-two types). And we have vectors of those (v2i40, v4i40).

And the byte size in our target is 16 bits.

When storing an i40 we need to store it as three 16-bit bytes, i.e. 48 bits.

When storing a v4i40 vector it will be stored as 4x48 bits.
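With 16-bit bytes the arithmetic works out like this (a small sketch of the rounding, not the actual target code):

```python
def store_size_in_bytes(bits, bits_per_byte=16):
    # Round a value's size up to whole (16-bit) bytes.
    return (bits + bits_per_byte - 1) // bits_per_byte

i40_bytes = store_size_in_bytes(40)
print(i40_bytes, i40_bytes * 16)      # 3 bytes, i.e. 48 bits

v4i40_bytes = 4 * i40_bytes           # each element padded to 48 bits
print(v4i40_bytes, v4i40_bytes * 16)  # 12 bytes, i.e. 4 x 48 = 192 bits
```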

One thing that we have had to patch is the getStoreSize() method in ValueTypes/MachineValueTypes, where we assume that vectors are bitpacked when the element size is smaller than the byte size (“BitsPerByte”):

/// Return the number of bytes overwritten by a store of the specified value
/// type.
unsigned getStoreSize() const {
  // return (getSizeInBits() + 7) / 8;
  // We assume that vectors with elements smaller than the byte size are
  // bitpacked. And that elements larger than the byte size should be padded
  // (e.g. i40 type for Phoenix is stored using 3 bytes (48 bits)).
  bool PadElementsToByteSize =
      isVector() && getScalarSizeInBits() >= BitsPerByte;
  if (PadElementsToByteSize)
    return getVectorNumElements() * getScalarType().getStoreSize();
  return (getSizeInBits() + (BitsPerByte - 1)) / BitsPerByte;
}
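As a sanity check, the same logic can be modeled in Python (an approximation of the patched function, not the LLVM source):

```python
def get_store_size(num_elems, elem_bits, bits_per_byte=8, is_vector=True):
    """Model of the patched getStoreSize(): pad vector elements that are at
    least as wide as a byte; bit-pack vectors of narrower elements."""
    if is_vector and elem_bits >= bits_per_byte:
        # Each element is rounded up to whole bytes on its own.
        elem_store = (elem_bits + bits_per_byte - 1) // bits_per_byte
        return num_elems * elem_store
    # Otherwise the whole value is bit-packed, then rounded up once.
    total_bits = num_elems * elem_bits if is_vector else elem_bits
    return (total_bits + bits_per_byte - 1) // bits_per_byte

print(get_store_size(2, 2))       # <2 x i2>, 8-bit bytes: bit-packed -> 1
print(get_store_size(4, 40, 16))  # v4i40, 16-bit bytes: padded -> 4 * 3 = 12
```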

The patch seems to work for in-tree-target tests as well as our out-of-tree target.

Whether it is a correct assumption for all targets is beyond my knowledge. Maybe only i1 vectors should be bitpacked?

Anyway, I think the bitpacked case is very special (we do not use it for our target…).

AFAIK bitcast is defined as writing to memory followed by a load using a different type. And I think that doing several scalar operations should give the same result as when using vectors. So bitcast of bitpacked vectors should probably be avoided?

This also reminded me of the following test case that is in trunk: test/CodeGen/X86/pr20011.ll

%destTy = type { i2, i2 }
define void @crash(i64 %x0, i64 %y0, %destTy* nocapture %dest) nounwind {
; X64-LABEL: crash:
; X64: # BB#0:
; X64-NEXT: andl $3, %esi
; X64-NEXT: movb %sil, (%rdx)
; X64-NEXT: andl $3, %edi
; X64-NEXT: movb %dil, (%rdx)
; X64-NEXT: retq
%x1 = trunc i64 %x0 to i2
%y1 = trunc i64 %y0 to i2
%1 = bitcast %destTy* %dest to <2 x i2>*
%2 = insertelement <2 x i2> undef, i2 %x1, i32 0
%3 = insertelement <2 x i2> %2, i2 %y1, i32 1
store <2 x i2> %3, <2 x i2>* %1, align 1
ret void
}

As you can see from the “X64” checks, the behavior is quite weird.

Both movb instructions write to the same address, so the result of the store <2 x i2> will be the same as when only storing one of the elements.

Is this really expected?

We have emailed Simon Pilgrim who added the test case to show that we no longer crash on this test case (see https://bugs.llvm.org/show_bug.cgi?id=20011). But even if the compiler doesn’t crash, the behavior seems wrong to me.

Regards,

Björn Pettersson

Yes, store+load is the right definition of bitcast. And in fact, the backend will lower a bitcast to a store+load to a stack temporary in cases where there isn’t some other lowering specified. The end result is probably going to be pretty inefficient unless your target has a special instruction to handle it (x86 has pmovmskb for i1 vector bitcasts, but otherwise you probably end up with some terrible lowering involving a lot of shifts).
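Under that definition, a bitcast of a bit-packed <8 x i1> to i8 can be emulated like this (a Python sketch of the semantics, not the backend lowering):

```python
def bitcast_v8i1_to_i8(lanes):
    # "Store" the bit-packed vector: lane i occupies bit i of the byte...
    byte = 0
    for i, b in enumerate(lanes):
        byte |= (b & 1) << i
    # ...then "load" the same memory back as an i8.
    return byte

print(bitcast_v8i1_to_i8([1, 0, 1, 1, 0, 0, 0, 0]))  # 0b1101 = 13
```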

So what is the correct behavior of the store <2 x i2>? Storing two bytes with a zero-extended i2 in each, or a bit-packed vector?

I can't remember any documentation mentioning that vectors are bit packed. But if LLVM is supposed to bit pack vectors, should we do it for any element size, or only when element size is less than the byte size, or only for i1 vectors?

Maybe bit packing should be optional (target specific)?

Btw, the IR/ISD node for a <2 x i2> store indicates that it is a one-byte (ST1) store, due to the getStoreSize() method saying one byte for this kind of vector. So alias analysis etc. thinks that only one byte is written.
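To make the consequence concrete, here is a toy emulation (plain Python, with made-up sample lane values) of what the generated X64 code does: getStoreSize() reports one byte for <2 x i2>, and both movb instructions then write that same byte, so the last store wins:

```python
# getStoreSize() for <2 x i2> with the current ceil(bits/8) formula:
size_in_bits = 2 * 2
store_bytes = (size_in_bits + 7) // 8
print(store_bytes)  # 1 -> alias analysis assumes a single byte is written

# Emulate the two movb instructions: both target (%rdx) + 0.
x1, y1 = 0b10, 0b01   # sample i2 lane values (made up for illustration)
mem = bytearray(1)
mem[0] = y1 & 3       # movb %sil, (%rdx)
mem[0] = x1 & 3       # movb %dil, (%rdx) overwrites the first store
print(mem[0])         # only %x1 survives; %y1 is lost
```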

So what is the correct behavior of the store <2 x i2>? Storing two bytes with a zero-extended i2 in each, or a bit-packed vector?

I can't remember any documentation mentioning that vectors are bit packed. But if LLVM is supposed to bit pack vectors, should we do it for any element size, or only when element size is less than the byte size, or only for i1 vectors?

For lack of any formal documentation, the most canonical source of information about the size of a type is DataLayout. And currently DataLayout::getTypeSizeInBits() assumes vectors are bit-packed.
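For illustration, that bit-packed size rule amounts to the following arithmetic (a sketch, not the actual DataLayout code):

```python
def type_size_in_bits(num_elems, elem_bits):
    # Bit-packed vector size: lanes * element width, no per-lane padding.
    return num_elems * elem_bits

def store_size_in_bytes(num_elems, elem_bits, byte_bits=8):
    # The in-memory footprint rounds the packed size up to whole bytes.
    bits = type_size_in_bits(num_elems, elem_bits)
    return (bits + byte_bits - 1) // byte_bits

print(type_size_in_bits(2, 2), store_size_in_bytes(2, 2))  # <2 x i2>: 4 bits, 1 byte
print(type_size_in_bits(8, 1), store_size_in_bytes(8, 1))  # <8 x i1>: 8 bits, 1 byte
```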

Maybe bit packing should be optional (target specific)?

That's... maybe possible? It would break the current rule that whether a bitcast is legal is independent of the target, and I don't see a compelling reason. (If you're trying to comply with some external ABI, you can always explicitly extend a value before you store it.)

-Eli

We still struggle with this in many cases - llvm/test/CodeGen/X86/vector-compare-results.ll has some pretty shocking cases that haven’t been addressed.

Weren’t the Embecosm chaps working on better support for targets with base types other than 8 bits?

This also reminded me of the following test case that is in trunk: test/CodeGen/X86/pr20011.ll

%destTy = type { i2, i2 }
define void @crash(i64 %x0, i64 %y0, %destTy* nocapture %dest) nounwind {
; X64-LABEL: crash:
; X64: # BB#0:
; X64-NEXT: andl $3, %esi
; X64-NEXT: movb %sil, (%rdx)
; X64-NEXT: andl $3, %edi
; X64-NEXT: movb %dil, (%rdx)
; X64-NEXT: retq
%x1 = trunc i64 %x0 to i2
%y1 = trunc i64 %y0 to i2
%1 = bitcast %destTy* %dest to <2 x i2>*
%2 = insertelement <2 x i2> undef, i2 %x1, i32 0
%3 = insertelement <2 x i2> %2, i2 %y1, i32 1
store <2 x i2> %3, <2 x i2>* %1, align 1
ret void
}

As you can see from the “X64” checks, the behavior is quite weird.
Both movb instructions write to the same address, so the result of the store <2 x i2> will be the same as when only storing one of the elements.
Is this really expected?

We have emailed Simon Pilgrim who added the test case to show that we no longer crash on this test case (see https://bugs.llvm.org/show_bug.cgi?id=20011). But even if the compiler doesn’t crash, the behavior seems wrong to me.

Yes, the behavior here is wrong. DAGTypeLegalizer::SplitVecOp_STORE/DAGTypeLegalizer::SplitVecRes_LOAD/etc. assume the element size is a multiple of 8. I’m sure this has been discussed before, but I guess nobody ever wrote a patch to fix it…?

Sorry, I might have missed that email. I ended up creating PR31265 as a meta bug because there are far too many cases like this where we don’t correctly pack illegal vector types; PR1784 goes into more details on this. My particular interest was in bool vectors, especially after AVX512 went with the bitpacked data representation for mask registers.