Disable integer promotion (Dilan Manatunga via cfe-dev)

Instead of suppressing the integer promotion rules, which are part of the ISO C/C++ Standards, we wrote a new pass that analyses the IR to see whether the input and output values are of an integer type narrower than the promoted types used in the IR. If we can prove that the outcome would be identical without the promotion, we reduce the IR to use the narrower form.
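
For illustration, a minimal sketch of the kind of rewrite involved, assuming the simplest trunc-of-promoted-add shape (the function and its structure are hypothetical and greatly simplified, not the actual pass):

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/PatternMatch.h"
using namespace llvm;
using namespace llvm::PatternMatch;

// Hypothetical sketch: collapse 'trunc (add (sext A), (sext B))' back to a
// narrow add when the trunc returns to the original type. For add, sub, mul
// and the bitwise operations, the low bits of the wide result equal the
// narrow result, so the promotion can be elided outright.
static bool narrowPromotedAdd(Instruction &I) {
  Value *A, *B;
  if (!match(&I, m_Trunc(m_Add(m_SExt(m_Value(A)), m_SExt(m_Value(B))))))
    return false;
  if (A->getType() != I.getType() || B->getType() != I.getType())
    return false;
  IRBuilder<> Builder(&I);
  I.replaceAllUsesWith(Builder.CreateAdd(A, B));
  return true;
}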

In our case the motive was to enhance vectorisation, because our vector ALU can work with 8-, 16- and 32-bit integers natively, yet ‘vXi8’ vectors were actually being promoted to multiple ‘v4i32’ vectors, requiring four times as many instructions as necessary, or worse still, being fully scalarised.

This pass was presented by my colleague Stephen Rogers in a Lightning Talk at the October 2015 LLVM Conference in San Jose, titled “Integer Vector Optimizations and ‘Usual Arithmetic Conversions’”. I can’t find the paper or slides on the LLVM Meetings page (perhaps these are not archived for Lightning Talks?), but as they are not large I have attached them here.

This approach allowed us to gain the optimisations that are possible with our architecture, which supports 8-, 16- and 32-bit native integer computations (scalar and vector), while also respecting the ISO C and C++ Standards. I am much more nervous of a front-end switch for this: it would lead to non-compliant programs, and in the presence of overloading and template instantiation it could also lead to very different programs. I would recommend that we do not add a front-end switch which alters the semantics of the language in this way.

It is my intention to publish this pass if it is of general interest, and since it is target-independent there are no particular blocking issues for me (patents, IP, etc.) in doing so. I do have to catch up with the HEAD revision to ensure that it still works correctly; it was working perfectly at SVN r262824, but it will be a month before I have enough time to do this, as we are busy with a product release that takes precedence.

All the best,

MartinO

Integer Vector Optimizations and UACs - Paper.pdf (81.6 KB)

Integer Vector Optimizations and UACs - Slides.pdf (96.4 KB)

Hi,

X86 has native support for i8 and i16. AArch64 and ARM have native i8 and i16 vector operations that are lowered and analysed using truncateToMinimalBitwidths in LoopVectorize. Similarly, for scalar code on x86, truncation is done in instcombine.

Why do you need to reinvent this?

Cheers,

James

Hi James, and thanks for pointing out the existence of this transformation; we were quite unaware of it.

As it happens, I am highly allergic to re-invention and avoid it whenever possible; the only reason an already overburdened team of two developers will re-invent is that they are unaware of an existing solution, which is easily done given the scope and complexity of LLVM.

So far as I can tell, ‘truncateToMinimalBitwidths’ is always enabled, so it is not a target-specific selection, and our target should automatically reap the rewards of this optimisation. I certainly cannot find a switch to enable or disable it, but in fact we are not seeing anywhere near the benefits we would expect:

void InnerLoopVectorizer::truncateToMinimalBitwidths() {
  // For every instruction I in MinBWs, truncate the operands, create a
  // truncated version of I and reextend its result. InstCombine runs
  // later and will remove any ext/trunc pairs.
This appears to run only on inner loops, and it appears to insert narrowings/truncations and subsequent widenings/extensions into the IR chains.

The DataLayout for our target includes “-n8:16:32”, so it should see the benefits of optimisations for multiple native integer widths. We also provide native support for both 32-bit and 128-bit SIMD.
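
For reference, these pieces sit in a DataLayout string roughly as follows (an illustrative fragment assembled from the entries mentioned in this thread, not our complete layout):

// Illustrative DataLayout fragment. "-n8:16:32" declares native scalar
// integer widths; the "-v" entries describe vector alignments only and
// say nothing about which element types are legal.
static const char *ExampleDL = "e-p:32:32-n8:16:32-v16:16-v32:32-v128:64";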

The pass that we wrote is quite different. It runs as a machine pass prior to loop-unrolling and vectorisation, and instead of pre-truncating and post-extending IR chains, it removes the existing pre-extending and post-truncating that bracket a sequence of IR operations when it can prove that the outcome is the same. The results are very good and match our expectations for such a transformation, which makes me wonder why ‘truncateToMinimalBitwidths’ does not already produce comparable results.

Our observation is that, with the new pass, a significant majority of vectorised code showed some improvement, with results as much as 40x faster than without it. Of the small number of tests that regressed in performance, adding a ‘#pragma clang loop unroll_count(N)’ eliminated the loss; this could probably be eliminated as well by better tuning of the cost models.
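
For anyone unfamiliar with the pragma, it is placed immediately before the loop; a small usage sketch (the unroll factor here is an arbitrary example value):

void example(char *__restrict dst, const char *__restrict src1,
             const char *__restrict src2) {
  // Ask Clang to unroll this loop by a fixed factor.
#pragma clang loop unroll_count(4)
  for (int i = 0; i < 256; ++i)
    dst[i] = src1[i] + src2[i];
}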

The re-invention is inadvertent, but in any event our new pass appears to provide considerable additional performance improvements that are not currently happening with the stock LLVM transformations.

I will have to contrive some tests to see why ‘truncateToMinimalBitwidths’ is not already doing this, and if there is something that we have done wrong in our target that is breaking it, I will happily revert to an existing solution.

MartinO

Hi,

I’m on vacation at the moment with only a phone to reply on but…

‘truncateToMinimalBitwidths’ is, as you point out, only for vectorisation. There are three cases:

  1. i8 and i16 are never supported on the target.
  2. i8 and i16 are supported for vectors but not for scalars (ARM).
  3. i8 and i16 are supported for scalars and vectors (x86).

(3) is handled by SimplifyDemandedBits in the instruction combiner, so it will work on scalars and vectors, but it will only ever truncate promotions if the smaller integer operation is valid on the target.

(2) is handled by truncateToMinimalBitwidths, where we need to determine, as part of the vectorisation profitability analysis, what the loop will look like after vectorisation. We insert trunc and ext nodes simply as a shortcut: we could elide the promotions as you do, but there are corner cases that make that a bit awkward, so we just add more casts and let a cleanup pass (instcombine) remove them intelligently.
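
A rough, hypothetical illustration of that shortcut (simplified from the description above; the names and the choice of zero-extension are assumptions, not the vectoriser's actual code):

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Hypothetical sketch: shrink a wide binary operation to MinBW bits by
// truncating its operands and re-extending its result, leaving redundant
// ext/trunc pairs for InstCombine to fold away later.
static void shrinkToBitwidth(BinaryOperator *I, unsigned MinBW) {
  IRBuilder<> B(I);
  Type *NarrowTy = IntegerType::get(I->getContext(), MinBW);
  Value *L = B.CreateTrunc(I->getOperand(0), NarrowTy);
  Value *R = B.CreateTrunc(I->getOperand(1), NarrowTy);
  Value *Narrow = B.CreateBinOp(I->getOpcode(), L, R);
  // Re-extend so existing users still see the original wide type.
  Value *Wide = B.CreateZExt(Narrow, I->getType());
  I->replaceAllUsesWith(Wide);
  I->eraseFromParent();
}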

I hope this answers your queries a bit more. Both of these should be kicking in already if your target advertises i8 as being legal for scalars or vectors, so I would check your target transform info to ensure the legality hook is returning true when it should.

Cheers

James

Hi,

Thanks, everyone, for your suggestions!

@David
I tried your method first, but it led to compilation errors for invalid bitcasts on some of the custom intrinsics I had added to my backend. I wasn’t sure why and didn’t investigate it too much.

@Norman
Thanks for the paper and slides you sent. They were useful in giving me some ideas on how to solve it.

@James
Thanks for pointing me to architectures that have already dealt with the issue. I had actually checked NVPTX and ARM for scalars, and when I saw that they didn’t do anything I decided to ask the question. My fault for not checking whether x86 handled the issue.

Again, thanks everyone.

-Dilan

Hi Norman,

The main impact of the implicit promotions that we observed was that vectorisation was taking place using vectors of a wider type than necessary, and hence using more of them. For example:

char* __restrict dst;
const char* __restrict src1;
const char* __restrict src2;

for (int i = 0; i < 256; ++i)
  dst[i] = src1[i] + src2[i];

would vectorise using ‘v4i32’ vectors instead of ‘v16i8’ vectors, resulting in four times as many instructions, increased vector register pressure, and of course unnecessary ‘extend’ and ‘truncate’ operations.

MartinO

No rush on an answer, James; I won’t get a chance to follow up on this for a couple of weeks anyway, so enjoy your vacation.

I think that we have the TTI correct, but there are other problems. For instance, ‘getNumberOfRegisters(true)’ is awkward because we have both 32-bit SIMD and 128-bit SIMD registers, and similarly for ‘getRegisterBitWidth(true)’. We just return ‘32’ and ‘128’ respectively, because the TTI interface does not allow us to discriminate between 32-bit and 128-bit vectors.
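
To make the limitation concrete, the hooks in question look roughly like this (a hedged sketch; ‘MyTTIImpl’ and the register counts are placeholders, not our real target):

// Hypothetical sketch of the two TTI hooks discussed above. The single
// bool parameter distinguishes only scalar from vector, so a target with
// both a 32-bit and a 128-bit SIMD register file can describe just one.
class MyTTIImpl {
public:
  unsigned getNumberOfRegisters(bool Vector) const {
    return Vector ? 16u  // the 128-bit file; the 32-bit file is invisible
                  : 32u; // scalar registers
  }
  unsigned getRegisterBitWidth(bool Vector) const {
    return Vector ? 128u : 32u;
  }
};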

The other hooks are for costs, and for the most part they look reasonable, though I delegate to the ‘BasicTTIImpl’ implementation for the interleaved memory cost because I have not yet measured the impact of changing this. Is there any particular cost hook that is more likely than another to influence ‘truncateToMinimalBitwidths’?

Regarding:

Both of these should be kicking in already if your target advertises i8 as being legal for scalars or vectors

I am only aware of the DataLayout ‘-n8:16:32’ for this; is there an equivalent for vectors? I also have ‘-v16:16-v32:32-v128:64’, but these entries only deal with the aggregate size of the vector and not its element type. Am I missing a hook in the TTI or STI, perhaps?

Thanks,

MartinO

Martin,

I am interested to know as well. Perhaps it is just a matter of your target’s TargetLowering constructor having a call to addRegisterClass() for that value type, thereby making it a legal type. Looking through the code, it appears this is the mechanism by which TTI enquires about the legality of a type.
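
A hedged sketch of that mechanism, with placeholder names (‘MyTargetLowering’, ‘Vec128RegClass’, ‘Subtarget’) standing in for the real target:

// Hypothetical sketch: associating value types with a register class in
// the TargetLowering constructor is what marks them legal, and the
// legality queries used by TTI ultimately consult this information.
MyTargetLowering::MyTargetLowering(const TargetMachine &TM)
    : TargetLowering(TM) {
  addRegisterClass(MVT::v16i8, &MyTarget::Vec128RegClass);
  addRegisterClass(MVT::v8i16, &MyTarget::Vec128RegClass);
  addRegisterClass(MVT::v4i32, &MyTarget::Vec128RegClass);
  // Must run after all register classes have been added.
  computeRegisterProperties(Subtarget.getRegisterInfo());
}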

Nemanja