[RFC] Allow loop vectorizer to choose vector widths that generate illegal types


Currently the loop vectorizer will, by default, not consider vectorization factors that would make it generate types that do not fit into the target platform’s vector registers. That is, if the widest scalar type in the scalar loop is i64, and the platform’s largest vector register is 256 bits wide, we will not consider a VF above 4.

We have a command line option (-mllvm -vectorizer-maximize-bandwidth) that chooses VFs for consideration based on the narrowest scalar type instead of the widest one, but I don’t believe it has been widely tested. If anyone has had an opportunity to play around with it, I’d love to hear about the results.

What I’d like to do is:

Step 1: Make -vectorizer-maximize-bandwidth the default. This should improve the performance of loops that contain mixed-width types.
Step 2: Remove the artificial width limitation altogether, and base the vectorization factor decision purely on the cost model. This should allow us to get rid of the interleaving code in the loop vectorizer, and get interleaving for “free” from the legalizer instead.
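
To illustrate the mixed-width case Step 1 targets, here is a hypothetical loop (the function name and constants are invented for illustration, not taken from any benchmark):

```cpp
#include <cstdint>

// Hypothetical mixed-width loop: the widest scalar type is 32-bit, the
// narrowest 8-bit. With 256-bit vector registers, choosing the VF from the
// widest type caps it at 256/32 = 8, leaving the i8 lanes three-quarters
// empty; choosing it from the narrowest type (what
// -vectorizer-maximize-bandwidth does) allows considering VF = 256/8 = 32.
void truncate_shift(uint8_t *dst, const uint32_t *src, int n) {
  for (int i = 0; i < n; ++i)
    dst[i] = (uint8_t)(src[i] >> 4);
}
```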

There are two potential roadblocks I see: the cost model and the legalizer. To make this work, we need to:
a) Model the cost of operations on illegal types better. Right now, what we get is sometimes completely ridiculous (e.g. see http://reviews.llvm.org/D21251).
b) Make sure the cost model actually stops us when the VF becomes too large. This is mostly a question of correctly estimating the register pressure. In theory, that should not be an issue - we already rely on this estimate to choose the interleaving factor, so using the same logic to upper-bound the VF directly shouldn’t make things worse.
c) Ensure the legalizer is up to the task of emitting good code for overly wide vectors. I’ve talked about this with Chandler, and his opinion (Chandler, please correct me if I’m wrong) is that on x86, the legalizer is likely to be able to handle this. This may not be true for other platforms. So, I’d like to try to make this the default on a platform-by-platform basis, starting with x86.
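
As a rough sketch of the register-pressure bound in (b) - the function name, parameters, and numbers below are invented for illustration, not the actual LoopVectorizer code:

```cpp
// Halve a candidate VF until its estimated register footprint fits.
// Each live vector value of ElemBits-bit elements at VF lanes occupies
// ceil(VF * ElemBits / RegBits) registers once VF exceeds the legal width;
// the same kind of estimate already drives the interleave-factor choice.
unsigned maxVFByPressure(unsigned LiveValues, unsigned NumRegs,
                         unsigned ElemBits, unsigned RegBits,
                         unsigned CandidateVF) {
  unsigned VF = CandidateVF;
  while (VF > 1) {
    unsigned RegsPerValue = (VF * ElemBits + RegBits - 1) / RegBits;
    if (LiveValues * RegsPerValue <= NumRegs)
      break;
    VF /= 2;
  }
  return VF;
}
```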

What do you think? Does this seem like a step in the right direction? Anything important I’m missing?


I know we already talked about this and so I’m more interested in others’ thoughts, but just to explicitly say it, this LGTM. I particularly think that using extra-wide vectors to model widening-for-interleaving is a much cleaner model in the IR.

Also, at least one other user of the IR’s vector capabilities is doing precisely this: Halide. I’m pretty happy about seeing convergence here and both Halide and the loop vectorizer generating more similar patterns.

Michael, thanks for driving this! My only comment is that before the final flip, we need to engage the community for more extensive performance testing on various architectures.


Of course.

If anyone wants to volunteer to test this on their workloads once the cost model is less broken (so we actually try to use higher VFs instead of rejecting them on cost grounds), that would be great.


It’s not clear how you would get ‘interleaving for free’.

Sorry, you’re right, that really wasn’t clear.
When I wrote “for free”, I meant “without having code in the vectorizer dealing specifically with interleaving”.

Consider a simple loop, like:

void hot(int *a, int *b) {
#pragma clang loop vectorize_width(4) interleave_count(2)
#pragma nounroll
  for (int i = 0; i < 1000; i++) {
    a[i] += b[i];
  }
}

We’ll get a vector loop with 4-element vectors, that, when compiling for SSE, gets lowered to:
.LBB0_3:                                # %vector.body
                                        # =>This Inner Loop Header: Depth=1

movdqu -16(%rsi,%rax,4), %xmm0
movdqu (%rsi,%rax,4), %xmm1
movdqu -16(%rdi,%rax,4), %xmm2
movdqu (%rdi,%rax,4), %xmm3
paddd %xmm0, %xmm2
paddd %xmm1, %xmm3
movdqu %xmm2, -16(%rdi,%rax,4)
movdqu %xmm3, (%rdi,%rax,4)
addq $8, %rax
cmpq $1004, %rax # imm = 0x3EC
jne .LBB0_3

If we instead have

#pragma clang loop vectorize_width(8) interleave_count(1)

We’ll get an 8-wide IR vector loop, but end up with almost the same lowering:

.LBB0_3:                                # %vector.body
                                        # =>This Inner Loop Header: Depth=1

movdqu 16(%rsi,%rax,4), %xmm0
movdqu (%rsi,%rax,4), %xmm1
movdqu 16(%rdi,%rax,4), %xmm2
movdqu (%rdi,%rax,4), %xmm3
paddd %xmm1, %xmm3
paddd %xmm0, %xmm2
movdqu %xmm2, 16(%rdi,%rax,4)
movdqu %xmm3, (%rdi,%rax,4)
addq $8, %rax
cmpq $1000, %rax # imm = 0x3E8
jne .LBB0_3

Legalization splits each 8-wide operation into two 4-wide operations, achieving almost the same result as vectorizing by a factor of 4 and unrolling by 2.
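
Conceptually, the split the legalizer performs here looks like the following scalar sketch (illustrative only; the real transformation happens on SelectionDAG nodes, not on C code):

```cpp
#include <array>

// A <8 x i32> add on a target whose widest register holds 4 lanes is
// legalized into two <4 x i32> adds -- the same shape that vectorizing by
// 4 and interleaving by 2 would have produced directly.
std::array<int, 8> add8(const std::array<int, 8> &a,
                        const std::array<int, 8> &b) {
  std::array<int, 8> r{};
  for (int half = 0; half < 2; ++half)     // two legal 4-wide operations
    for (int lane = 0; lane < 4; ++lane)   // one paddd-equivalent each
      r[half * 4 + lane] = a[half * 4 + lane] + b[half * 4 + lane];
  return r;
}
```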

The question is whether the legalizer is actually up to doing this well in general.

Thanks, Michael.

Our architecture has two different sizes of vector registers, with separate register files and functional units for each, and the existing cost model already makes optimising for this quite difficult. Ideally the loop vectoriser would be able to vectorise code in the loop using both in parallel. At the moment the in-tree LLVM targets all use a single vector register size and a single register file, but I expect there are out-of-tree targets using multiple vector register widths.

Removing the width limitation altogether would, I think, make optimising for hybrid vector models such as ours less difficult, but it also means the cost model should be able to query for the vector width and get back a list instead of the single value it gets now. Querying for the number of vector registers should be a function of the vector type being examined.
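
A sketch of what such a query could look like - this is not an existing TTI interface; the struct, function name, and widths are all invented to illustrate the shape of the query:

```cpp
#include <vector>

// A dual-width target reports each vector register file separately, so the
// vectorizer can cost a candidate VF against the file that would actually
// hold it, and query the register count per width instead of globally.
struct HypotheticalVectorRegFile {
  unsigned WidthInBits;   // e.g. 128 for the short-vector file
  unsigned NumRegisters;  // registers available at that width
};

std::vector<HypotheticalVectorRegFile> getVectorRegisterFiles() {
  return {{128, 32}, {512, 8}};  // assumed hybrid target
}
```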


Step 1: Make -vectorizer-maximize-bandwidth the default. This should improve
the performance of loops that contain mixed-width types.

Hi Michael,

Per target, after investigation, I think this is perfectly fine.

Step 2: Remove the artificial width limitation altogether, and base the
vectorization factor decision purely on the cost model. This should allow us
to get rid of the interleaving code in the loop vectorizer, and get
interleaving for "free" from the legalizer instead.

I'm slightly worried about this one, though.

The legalizer is a very large mess, with many unknown (or long
forgotten) inter-dependencies and intra-dependencies (with isel,
regalloc, back-end opt passes, etc.), which were all mostly annealed
into working by heuristics and hack-fixes. The multiple attempts at
re-writing instruction selection are one demonstration of that
problem...

So, while I agree with Hal that this will put good pressure on
improving the cost model (as well as the intra-dependencies), and
that's something very positive, I fear that if the jump becomes too
far, we'll either break the world or not jump at all.

I'm not saying we shouldn't do it, but if/when we do it, it would be
*very* beneficial to provide a multi-step migration path for future
targets to move in, not just a multi-step initial migration for the
primary target.

Another thing to consider is that the SLP vectorizer can use non-SIMD
FP co-processors (VFP on ARM), which have different costs than SIMD,
but may share the same decision path, especially if we move the
decision lower down into the legalizer.

Also, there are hidden costs in sharing registers between the
different units, or moving values between them, and those are not
entirely captured by the current cost model (only via heuristics).
This may not be a problem for Intel, but it certainly will be for
ARM/AArch64.

I had a plan 3 years ago to look into that, but never got around to
doing it. Maybe it's about time I did... :-)

Finally, if you need pre-testing and benchmarking, let me know and I
can spare some time to help you. I'll be glad to be copied on the
reviews and will do my best to help.

All in all, I don't think we'll get anything for free on this change.
There will be a cost, and it will be different on different targets,
but it may very well be a cost worth taking. I don't know enough yet
to have an opinion.


Thanks, Renato!

How common is the "LoopVectorizer + FastISel + Performance is important" case?

Ah, I didn't mean that FastISel would be a problem here.

FastISel was an attempt to re-write the SelectionDAG-based
instruction selection, but it wasn't enabled for all cores / ISAs, and
now it'll probably never be used for Thumb on ARM (sorry, context).

I don't want that to happen with the vectorizer: having two
completely different default behaviours that span all the way down to
the codegen level would be a nightmare to implement, test and
benchmark.

Then we can move the per-platform defaults to either the "regular" or the
"harder" versions independently. Is this what you meant? If so, it makes
perfect sense to me.

Sort of, yes. But also in the sense of pushing the changes through the
back-end. Example:

Step 1: enable larger sizes for types X; teach type legalization to handle them.
Step 2: enable it for type Y; change the legalizer; fix codegen bug B1.
Step 3: refactor the SelectionDAG code for target T1 a bit; prepare the Greedy register allocator.
Step 4: enable T1's X; change the legalizer.

So, if we could selectively enable this for certain types on certain
targets (say, single-FP on ARM), then we'll be able to progress and
refactor as we go along, without the *need* to finish everything in
less than six months to make it into the next release.

I agree this is a problem, but it seems like it should be orthogonal to what
I'm suggesting. I probably don't understand the background well enough.

It should be more orthogonal than it is, really. The cost model has a
lot of heuristics about how we *think* the IR pattern X will lower
into the <Target> instructions Y, and some of those costs were fudged
into submission by trial and error.

As we start changing the costs, or worse, the cost model itself, we
can completely derail some problematic cases. If we're not looking for
those, we'd only realise it months later, during release
benchmarking... :-(

That would be great! I'm going to start with X86, mostly because that's the
platform I'm most familiar with. But once it works on X86 (hopefully), I'll
definitely need help with other platforms, both in terms of the cost model
and benchmarking.

Makes sense.

If I have some time, early on, I want to try your x86 patches and
enable them for ARM, just to see the world crumble. :-)