SLP regression on SystemZ

Hi,

I have come across a major regression caused by SLP vectorization (+18% on SystemZ, just from enabling SLP). It all comes down to one particular, very hot loop.

Scalar code:
%conv252 = zext i16 %110 to i64
%conv254 = zext i16 %111 to i64
%sub255 = sub nsw i64 %conv252, %conv254
… repeated

SLP output:
%101 = zext <16 x i16> %100 to <16 x i64>
%104 = zext <16 x i16> %103 to <16 x i64>
%105 = sub nsw <16 x i64> %101, %104
%106 = trunc <16 x i64> %105 to <16 x i32>
; for each element e = 0..15:
%107 = extractelement <16 x i32> %106, i32 e
%108 = sext i32 %107 to i64

In this case, the vectorized code should only need to be

%101 = zext <16 x i16> %100 to <16 x i64>
%104 = zext <16 x i16> %103 to <16 x i64>
%105 = sub nsw <16 x i64> %101, %104
; for each element e = 0..15:
%107 = extractelement <16 x i64> %105, i32 e

but this is not what happens: for all 16 elements, an extract and an extend are emitted.

I see that there is a special function in the SLP vectorizer that performs this truncation and emits the extract+extend whenever possible. Is that the place to fix this?

Or would it be better to rely on InstCombiner?
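
To make the question concrete, here is a single-lane sketch of the fold that would be needed (my own reduction, not actual compiler output; the function name is made up). The trunc + extract + sext round trip gives the same value as extracting the wide lane directly, but only because the i64 lanes here are known to fit in a signed i32 (they are a sub of two zext i16 values):

; Hypothetical single-lane version of the pattern above. Only under the
; assumption that each i64 lane of %wide fits in a signed i32 (true in the
; loop above) is the chain below equivalent to extracting the wide lane.
define i64 @lane0(<16 x i64> %wide) {
  %narrow = trunc <16 x i64> %wide to <16 x i32>
  %elt = extractelement <16 x i32> %narrow, i32 0
  %ext = sext i32 %elt to i64
  ; desired form, valid only under the value-range assumption above:
  ;   %ext = extractelement <16 x i64> %wide, i32 0
  ret i64 %ext
}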

Is this truncation done by SLP under the assumption that extending an extracted element is free? On SystemZ, this is not true.

/Jonas

Hi Jonas,

The vectorizers do attempt to type-shrink elements when possible in order to pack more data into vectors. It looks like that’s what’s happening here. This transformation is cost-modeled, but there are assumptions made about what InstCombine will be able to clean up afterwards. Would you mind filing a bug with a test case that we can take a look at?
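
To make the intent concrete, here is a rough sketch (mine, not the actual SLP output; the function name is made up) of what the type-shrinking is aiming for: the subtract of two zext i16 values fits in i32, so if the users could consume i32 lanes directly, the arithmetic would only need <16 x i32>, packing twice as many lanes per vector register compared to <16 x i64>:

define <16 x i32> @shrunk_sub(<16 x i16> %a, <16 x i16> %b) {
  %za = zext <16 x i16> %a to <16 x i32>
  %zb = zext <16 x i16> %b to <16 x i32>
  ; the result is in [-65535, 65535], so i32 arithmetic (with nsw) suffices
  %s = sub nsw <16 x i32> %za, %zb
  ret <16 x i32> %s
}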

– Matt

Hi Matt,

thanks for taking a look, please see the bug report I have filed.

/Jonas