[Proposal][RFC] Epilog loop vectorization

Hi,

This is a proposal about epilog loop vectorization.

Currently the Loop Vectorizer inserts a scalar epilogue loop to handle loops whose iteration counts are not known at compile time.

The Loop Vectorizer supports loops with an unknown trip count. Since an unknown trip count may not be a multiple of the vector width, the vectorizer has to execute the last few iterations as scalar code; it keeps a scalar copy of the loop for these remaining iterations.

A loop vectorized with a large vector width has a high chance of executing many scalar iterations.

For example, an i8 data type on a target with 256-bit registers can be vectorized with vector width 32; the scalar (epilog) loop can then execute up to 31 iterations, which is significant and worth vectorizing.

A large vector factor poses the following challenges:

  1. The number of remainder iterations can be substantial.

  2. The actual trip count at runtime may be substantial yet still below the minimum trip count required to execute the vector loop.

These challenges can be addressed with masked instructions, but such instructions are limited and not available on all targets.

With epilog vectorization, our aim is to vectorize the epilog loop in cases where the original loop is vectorized with a large vector factor and has a high chance of executing many scalar iterations.

This requires the following changes:

  1. Costing: preserve all profitable vector factors.

  2. Transform: create an additional vector loop with the next profitable vector factor.

Please refer to the attached file (BlockLayout.png) for details about the transformed block layout.

Patch is available at: https://reviews.llvm.org/D30247

Regards,

Ashutosh

Hi Ashutosh,


Is this something that you propose to be on by default for wide VPU architectures without masking support? I.e. how widely is this applicable? If not then perhaps a better strategy would be to just annotate the remainder loop with some metadata to limit the vectorization factor and just rerun the vectorizer.

Adam

Hi,

Looks interesting!

I’m wondering about the extra cost in code size and whether it could outweigh the benefit?

What data can you provide on benchmarks (code size and performance)?

Thanks,

Why would this solution (annotating the remainder loop to limit vectorization and rerunning the vectorization process) not be preferred regardless? One issue that might be relevant here are runtime aliasing checks, which are probably going to be redundant, or partially redundant, between the different vectorized versions. Will we be able to do any necessary cleanup after the fact? Moreover, we often don’t hoist (unswitch) these checks out of inner loops (perhaps because they’re inside the trip-count checks?); I wonder if the proposed block structure will make this situation better or worse (or have no overall effect). Thanks again, Hal

+1 for “just rerun the vectorizer” on the scalar remainder loop, as the proposed decision process is broken down into “first determine best VF for main loop, then determine best next EpilogVF for remainder loop” anyhow:

const LoopVectorizationCostModel::VectorizationFactor EpilogVF =
    CM.identifyNextProfitableVF(VF.Width);

Raising some aspects:

o The unroll factor may also affect the best EpilogVF. For instance, if UF=1 then EpilogVF < VF, as the patch currently enforces. But if UF is larger the next profitable EpilogVF could be equal to VF.

o The scalar loop serves two purposes, as determined by its two pre-headers: either as a default in case runtime dependence checks fail, or as a remainder loop, in which case it is known to be vectorizable with trip count less than VF*UF (or equal to it). Would be good to keep this in mind when rerunning.

(*) Note that if the original loop requiresScalarEpilogue(), the trip count of the remainder loop may be equal to VF*UF, and/or the vectorized remainder loop may itself require a scalar epilogue.

o It should be interesting to see how a single masked iteration that uses the original VF, say for UF=1, works. LV should hopefully already support most of what’s needed.

o The original Trip Count clearly affects the profitability of vectorizing the remainder loop. Would be good to leverage any information that can be derived about TC either statically or based on profiling, when determining EpilogVF. Potential speedups and overheads/slowdowns could possibly be demonstrated using extreme cases; what would the best and worst cases be? Perhaps TinyTripCountVectorThreshold is also affected?

o Finally, VPlan is currently modeling how to vectorize the loop body according to the potentially profitable VFs. Its modeling could be used to generate vector code for both body and remainder, and to consider their combined, overall cost.

Ayal.

Thanks for looking into this.

  1. Issues with rerunning the vectorizer:

The vectorizer might generate redundant alias checks while vectorizing the epilog loop.

Redundant alias checks are expensive; we would like to reuse the results of the already-computed alias checks.

With metadata we can limit the width of the epilog loop, but I am not sure about reusing the alias check results.

Any thoughts on rerunning the vectorizer while reusing the alias check results?

  2. Best & worst case for epilog vectorization:

Epilog vectorization incurs the additional cost of checks that decide whether to execute the epilog vector loop or the scalar loop.

These checks are:

a) Min trip count check for the epilog vector loop.

b) Alias result check (only if an alias check was generated for the first vector version).

Where the epilog vector trip count is large, epilog vectorization with a profitable width is likely to give a gain. The worst case is probably executing the scalar loop after paying for these additional checks.

  3. Benchmarking & results:

We have observed a 4% improvement in one of our internal benchmarks.

We tried CPU2006 and didn’t find any degradations or improvements.

I have not yet measured the code size increase; I will check and get back.

  4. What should be the minimum first-vector-loop width to enable epilog vectorization?

This width should not be too small, as that may degrade performance. In my limited tests, 16 is the point at which we see gains on one of our internal benchmarks. Deciding the minimum width requires more experiments and testing.

  5. Unrolling issues:

As Ayal mentioned, with a large unroll factor the next profitable EpilogVF could be equal to VF.

For the same reason, the current patch enforces UF=1, as unrolling can minimize the possibility of executing the epilog vector loop.

Example to understand the new layout:

void foo (char *A, char *B, char *C, int len) {
  int i = 0;
  for (i = 0; i < len; i++)
    A[i] = B[i] + C[i];
}

This loop gets vectorized with width 32, and the epilog loop gets vectorized with width 8.

a) Please check attached IR(test.ll)

b) To understand how the alias check result gets reused, check the usage of the temporary “acr.val”.

Regards,

Ashutosh

test.ll (12.2 KB)

One way of looking at this is: Reusing the alias-check result is really just a conditional propagation problem; if we don’t already have an optimization that can combine these after the fact, then we should. -Hal

+Danny

Isn’t Extended SSA supposed to help with this?

I see this as a problem that needs to be solved regardless when loop-versioning is performed by multiple passes [1]. We need to be able to come up with the minimum sets of checks:

  • remove redundant checks in favor of identical dominating checks
  • share (i.e. factor out) identical checks required by consecutive loops

Ashutosh, I suggest that you try rerunning the vectorizer on the relevant testcases and file PRs about the inefficiencies you encounter.

Adam

[1] http://lists.llvm.org/pipermail/llvm-dev/2015-May/085922.html

Yes, it will already solve this with no issue. GVN probably does too.

even if you have

if (a == b)
if (a == c)
if (a == d)
if (a == e)
if (a == g)

and we can prove a ... g equivalent, newgvn will eliminate them all and
set all the branches true.

If you need a simpler clean up pass, we could run it on sub-graphs.
The only thing you'd have to do is write some code to set "live on entry"
subgraph variables in their own congruence classes.
We already do this for incoming arguments.

Otherwise, it's trivial to make it only walk things in the subgraph.

Yes we probably don’t want to run a full GVN after the “loop-scheduling” passes.

I guess the pipeline to experiment with for now is opt -loop-vectorize -loop-vectorize -newgvn.

Adam

Right, and if you guarantee the conditions involve scalars (i.e. the pointer is not from a load), it can be made even faster by turning off the memory handling (i.e. not building MemorySSA).

NewGVN will still give correct answers if you value number any instruction
you like as "unknown expression".
I have a patch to add a debug counter that does just that.

FWIW, we could, just without the memory-dependence analysis enabled (i.e. set the NoLoads constructor parameter to true). GVN is pretty fast in that mode. -Hal

There’s another issue with re-running the vectorizer (which I support, btw - I’m just saying there are more problems to solve on the way :) )

Historically, we haven’t even tried to evaluate the cost of the “constant” (not per-iteration) vectorization overhead - things like alias checks. Instead, we have hard bounds - we won’t perform alias checks that are “too expensive”, and, more importantly, we don’t even try to vectorize loops with known low iteration counts. The bound right now is 16, IIRC. That means we don’t have a good way to evaluate whether vectorizing a loop with a low iteration count is profitable or not.

This also makes me wary of the “we can clean up redundant alias checks later” approach. When trying to decide whether to vectorize by 4 a loop that has no more than 8 iterations (because we just vectorized by 8 and it’s the remainder loop), we really want to know whether the alias checks we’re introducing are going to survive or not.

Michael

FWIW, we could, just without the memory-dependence analysis enabled (i.e.
set the NoLoads constructor parameter to true). GVN is pretty fast in that
mode.

Yeah, same. We can definitely make it work to clean up the redundant checks
cheaply.

FWIW, we could, just without the memory-dependence analysis enabled (i.e. set the NoLoads constructor parameter to true). GVN is pretty fast in that mode.

OK. Another data point is that I’ve seen cases in the past where the alias checks required for the loop passes could enable GVN to remove redundant loads/stores. Currently we can only pick these up with LTO when GVN is rerun.

Adam

This is just GVN brokenness, newgvn should not have this problem.
If it does, i'd love to see it.

(I'm working on the last few parts of turning it on by default, but it
requires a new getModRefInfo interface to be able to get the last few
testcases)

We should really improve this as well. It occurs to me that, if SCEV’s known-predicate logic were smart enough, it would seem practical to not introduce redundant checks in the first place (although it would imply some gymnastics when examining the control flow around the loop and then restructuring things when we generate the code for the loop). -Hal

I thought that the problem is that we just don’t run GVN after that point in the pipeline. -Hal

I thought that the problem is that we just don’t run GVN after that point in the pipeline.

Yeah, that is the problem but I think Danny misunderstood what I was trying to say.

This was a datapoint to possibly rerun GVN with memory-awareness.

"Another data point is that I’ve seen cases in the past where the alias
checks required for the loop passes could enable GVN to remove redundant
loads/stores."

Why would GVN suddenly be able to remove redundant loads and stores due to
insertion of alias checks?

It should have been able to remove them regardless.