Questions on LLVM vectorization diagnostics

Hi Dangeti, Ramakrishna, Adam, and Gerolf,

Yes, this is an area that needs further improvement. We have some immediate plans to make these diagnostics more useful. See the recent llvm-dev threads [1], [2].

It takes a lot of dedicated effort to make vectorization reports easy to understand for ordinary programmers
(i.e., those who are not compiler writers). Having done that for ICC ourselves, we truly believe it was a good
investment of resources. There are areas where both experts and non-experts in vectorizer development
can contribute equally. That includes getting source code locations right and printing variable names (and memory
references) at the source-level representation. If anyone has data on how well LLVM does in these
areas, we'd appreciate a pointer to such information. Otherwise, we'll study that when our development
effort hits that area, report back, and contribute improvements.

In our analysis we have never seen LLVM try to vectorize outer loops. Is this well known? Is outer loop vectorization implemented in LLVM
as it is in GCC (http://dl.acm.org/citation.cfm?id=1454119)? If not, is someone working on it?
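
For instance, in a made-up kernel like the one below, the inner loop carries a sequential recurrence and cannot be vectorized, while the outer iterations are independent, so an outer-loop vectorizer could still profit:

```cpp
// Hypothetical kernel: the inner j-loop has a loop-carried recurrence on x,
// so inner-loop vectorization is blocked, but each outer i-iteration works
// on its own row, so the outer loop is a legal vectorization candidate.
void smooth_rows(float *a, int n, int m) {
  for (int i = 0; i < n; ++i) {   // outer loop: iterations independent
    float x = a[i * m];
    for (int j = 1; j < m; ++j)   // inner loop: sequential recurrence
      x = 0.5f * (x + a[i * m + j]);
    a[i * m] = x;
  }
}
```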

I have heard various people mention this, but I am not sure whether actual work is already taking place.

We are currently working on introducing a next-generation vectorizer design to LLVM, aiming to support OpenMP 4.5 SIMD
(i.e., including outer loop vectorization). I hope to be able to send an RFC on the high-level design document to llvm-dev
next month. We are currently working on an RFC for the "vectorizer's output" (IR, not diagnostics), to be discussed before the
next-gen design. As part of this next-gen work, we'll also be working on improving diagnostics. Stay tuned.
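
To give a flavor of what OpenMP 4.5 SIMD asks of the vectorizer, here is a small made-up example; the simd pragma on the outer loop makes that loop the explicit vectorization target:

```cpp
// Hypothetical example of explicit outer loop vectorization under OpenMP 4.5:
// each SIMD lane executes one complete i-iteration, inner j-loop included.
void sum_rows(float *a, const float *b, int n, int m) {
#pragma omp simd
  for (int i = 0; i < n; ++i)   // the SIMD loop is the outer loop
    for (int j = 0; j < m; ++j) // runs serially within each lane
      a[i] += b[i * m + j];
}
```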

> actual work is already taking place.

Yes, our hands are dirty with actual coding work to ensure that the high-level design makes sense. :slight_smile:

Thanks,
Hideki Saito (hideki dot saito at intel dot com)
Technical Lead of Vectorizer Development
Intel Compiler and Languages

Has there been a follow-up? I’m very interested in specific examples underlying the key design decisions. Specifically, I expect that you have examples that show an x% speed-up with ICC vs. clang because of XYZ in your design. Similarly, if you have examples of better diagnostics, it probably makes sense to share them.

Thanks
Gerolf

Hi, Gerolf.

We've been a bit quiet for some time. After listening to feedback on the project internally and externally,
we decided to take a more generally accepted community development model (building up through
a collection of small incremental changes) rather than trying to make a big step forward. That change of course
took a bit of time, but we are getting close to the first NFC patch, on which we hope to incrementally build
up new functionality.

Within a few weeks, we plan to send in the first of a series of RFCs, soon to be followed by
the NFC patch for review as the first step. We are also making a submission for a talk about this project,
plus a submission for a BoF about vector masking, at the 2016 LLVM Developers' Meeting. I hope our submissions
will be accepted. Looking forward to having great discussions on the mailing list, in patch review,
and in person.

Gerolf > I’m very interested in specific examples underlying the key design decisions.

Since the two paragraphs above aren't too useful in answering your questions, let me talk about
one particular example: auto-vectorization of outer loops.

I do not know whether any readers here have noticed: the ICC auto-vectorizer works inner to outer.
If the inner loop is auto-vectorized, the outer loop is no longer a vectorization candidate. Currently,
it does not have the ability to compare the benefit of vectorizing the outer loop against the benefit of
vectorizing the inner loop(s); people in academia, here's a paper opportunity. :slight_smile:
Oftentimes, outer loop vectorization requires massaging the inner loop's control flow, and the ICC
vectorizer does such massaging at its underlying IR level, just like many of you who have
implemented OpenMP SIMD, OpenCL, and other explicit vector programming models. This is
okay when you know ahead of time which loop to vectorize. It is not so nice if you are trying to
decide between inner loop vectorization and outer loop vectorization. As such, one of the key
design considerations was being able to "pseudo-massage inner loop control flow" without modifying
the underlying IR, until the cost model decides where to vectorize.
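
To make "massaging of inner loop control flow" concrete, here is a purely illustrative, hand-written sketch (not ICC's actual transformation), assuming a 4-wide vector modeled with a scalar lane loop:

```cpp
// Scalar original: the inner trip count varies with the outer iteration.
void scalar(float *x, const int *trip, int n) {
  for (int i = 0; i < n; ++i)
    for (int j = 0; j < trip[i]; ++j)
      x[i] += 1.0f;
}

// After outer loop vectorization, the 4 lanes want different inner trip
// counts, so the inner loop is massaged into a single uniform loop that
// runs until every lane is done, with the body under a per-lane mask.
void outer_vectorized(float *x, const int *trip, int n) {
  int i = 0;
  for (; i + 4 <= n; i += 4) {               // 4 outer iterations per pass
    int j = 0;
    bool any = true;
    while (any) {                            // the massaged inner loop
      any = false;
      for (int lane = 0; lane < 4; ++lane) { // models one masked vector op
        if (j < trip[i + lane]) {            // per-lane mask
          x[i + lane] += 1.0f;
          any = true;
        }
      }
      ++j;
    }
  }
  for (; i < n; ++i)                         // scalar epilogue for remainder
    for (int j = 0; j < trip[i]; ++j)
      x[i] += 1.0f;
}
```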

This by itself is a rather ambitious project. Many people advised us to take many small incremental steps,
and we listened. That has led to the small NFC patch mentioned above.

I hope this revelation is interesting enough for many of you to stay tuned for our further development.
I have probably said too much about ICC vectorizer internals. One of the future RFCs (it'll certainly take some
time to get to that point through many incremental steps) will discuss inner versus outer
auto-vectorization. We hope to get there sooner rather than later.

Now, I have one question. Suppose we'd like to split the vectorization decision into an Analysis pass and the vectorization
transformation into a Transformation pass. Is it acceptable for an Analysis pass to create new Instructions and new BasicBlocks,
keep them unreachable from the underlying IR of the Function/Loop, and pass them to the Transformation pass as
part of the Analysis's internal data? We've been operating under the assumption that such Analysis pass behavior is unacceptable.
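
For clarity, the pattern in question looks roughly like the sketch below; BasicBlock::Create with a null parent is the existing LLVM API for making a block that belongs to no Function (whether an Analysis should do this is exactly the open question):

```cpp
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"

// Parent == nullptr: the block is not inserted into any Function, so it is
// unreachable from the underlying IR. The Analysis would own it, hand it to
// the Transformation pass as internal data, and delete it itself.
llvm::BasicBlock *makeShadowBlock(llvm::LLVMContext &Ctx) {
  llvm::BasicBlock *BB = llvm::BasicBlock::Create(Ctx, "shadow", nullptr);
  llvm::IRBuilder<> Builder(BB); // new Instructions would be appended here
  (void)Builder;
  return BB;
}
```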

Hi Saito,

First let me say, impressive work you guys are planning for the
vectoriser. Outer loop vectorisation is not an easy task, so feel free
to share your ideas early and often, as that would probably mean a lot
less work for you guys, too.

Regarding generation of dead code, I don't remember any pass doing
this (though I haven't looked at many). Most passes do some kind of
clean-up at the end, and DCE ends up getting rid of spurious things
here and there, so you can't *rely* on it being there. It's even worse
than metadata: metadata is normally left alone *unless* it needs to be
destroyed, whereas dead code is purposely destroyed.

But analysis passes shouldn't be touching code in the first place. Of
course, creating additional dead code is not strictly changing code,
but it could cause code bloat, leaks, or make things worse for
other analyses. My personal view is that this is a bad move.

Hideki > Please let us know if this is a generally acceptable way for an Analysis pass to work; this might make our development
Hideki > move quicker. Why would we want to do this? As mentioned above, we need to "pseudo-massage inner loop control flow"
Hideki > before deciding where/whether to vectorize. I hope someone can give us a clear answer.

We discussed the split of analysis vs transformation with Polly
years ago, and it was considered "a good idea". But that relied
exclusively on metadata.

So, first, the vectorisers and Polly would pass over the IR as
analysis passes, leaving a trail of width/unroll factors, loop
dependency trackers, recommended skew factors, etc. Then the
transformation passes (Loop/SLP/Polly) would use that information,
transform the loop as best they can, and clean up the metadata,
leaving only a single "width=1", which means "don't try to vectorise
any more". Clean-ups as required, after the transformation pass.

The current loop vectoriser is split into three stages: validity, cost,
and transformation. We only check the cost if we know of a valid
transformation, and we only transform if we know of a cost better
than width=1. Where the cost analysis would live depends on how we
arrange the Polly, Loop, and SLP vectorisers and their analysis passes.
Conservatively, I'd leave the cost analysis with the transformation,
so we only do it once.
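
In skeleton form, with stand-in types (these are not LLVM's real class names; the real logic lives in LoopVectorize.cpp), the staging looks something like:

```cpp
struct Legality  { bool canVectorize() const; };      // stage 1: validity
struct CostModel { unsigned selectWidth() const; };   // stage 2: cost
struct Transform { void vectorize(unsigned Width); }; // stage 3: transform

bool tryToVectorize(Legality &L, CostModel &C, Transform &T) {
  if (!L.canVectorize())
    return false;               // invalid: never even ask the cost model
  unsigned Width = C.selectWidth();
  if (Width <= 1)
    return false;               // nothing beats width=1: stay scalar
  T.vectorize(Width);           // transform only when valid and profitable
  return true;
}
```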

The outer loop proposal, then, suffers from the cost analysis not
being done at the same time as the validity analysis. It would also
complicate things a lot to pass more than one kind of possible
vectorisation technique via the same metadata structure, which will
probably already be complex enough. This is the main reason why we
haven't split yet.

Given that scenario of split responsibility, I'm curious as to your
opinion on the matter of carrying (and sharing) metadata between
different vectorisation analysis passes and different transformation
types.

cheers,
--renato

Hi Hideki,

Thanks for the interesting writeup!

Hideki > Now, I have one question. Suppose we'd like to split the vectorization decision into an Analysis pass and the vectorization
Hideki > transformation into a Transformation pass. Is it acceptable for an Analysis pass to create new Instructions and new BasicBlocks,
Hideki > keep them unreachable from the underlying IR of the Function/Loop, and pass them to the Transformation pass as
Hideki > part of the Analysis's internal data? We've been operating under the assumption that such Analysis pass behavior is unacceptable.

Renato > Hi Saito,
Renato >
Renato > First let me say, impressive work you guys are planning for the
Renato > vectoriser. Outer loop vectorisation is not an easy task, so feel free
Renato > to share your ideas early and often, as that would probably mean a lot
Renato > less work for you guys, too.
Renato >
Renato > Regarding generation of dead code, I don't remember any pass doing
Renato > this (though I haven't looked at many). Most passes do some kind of
Renato > clean-up at the end, and DCE ends up getting rid of spurious things
Renato > here and there, so you can't *rely* on it being there. It's even worse
Renato > than metadata: metadata is normally left alone *unless* it needs to be
Renato > destroyed, whereas dead code is purposely destroyed.
Renato >
Renato > But analysis passes shouldn't be touching code in the first place. Of
Renato > course, creating additional dead code is not strictly changing code,
Renato > but it could cause code bloat, leaks, or make things worse for
Renato > other analyses. My personal view is that this is a bad move.

While I agree with Renato, it is definitely worth mentioning LCSSA in this context. I still don’t know what we should call it: an analysis or a transformation. It can sometimes be viewed as an analysis, in the sense that a pass can ‘preserve’ it (i.e., the IR is still in LCSSA form after the pass). At the same time, LCSSA obviously can and does transform the IR, but it does so by generating ‘dead’ code: phi-nodes that can later be folded easily.

So, to answer your question: I think it is OK to do some massaging of the IR before your pass, and you could use LCSSA as an example of how it can be implemented. However, creating unreachable blocks sounds a bit hacky; it looks like we’re just going to use the IR as some shadow data structure. If that’s the case, why not use a shadow data structure :slight_smile: ? ScalarEvolution might be an example of how this can be done: it creates a map from IR instructions to SCEV objects.
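
In real API terms, the ScalarEvolution pattern looks like the sketch below: the IR is never rewritten; SCEV objects live in a side map keyed by the Values they describe:

```cpp
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/IR/Value.h"

// ScalarEvolution as the model "shadow data structure": getSCEV builds
// (and internally caches) the shadow object for V, leaving the IR untouched.
const llvm::SCEV *shadowOf(llvm::ScalarEvolution &SE, llvm::Value *V) {
  if (!SE.isSCEVable(V->getType()))
    return nullptr;   // not every Value has a SCEV shadow
  return SE.getSCEV(V);
}
```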

Thanks,
Michael

Renato and Michael, thanks for your replies.

Renato > Outer loop vectorisation is not an easy task, so feel free
Renato > to share your ideas early and often, as that would probably mean a lot
Renato > less work for you guys, too.

Will definitely do.

Renato > But analysis passes shouldn't be touching code in the first place. Of course, creating
Renato > additional dead code is not strictly changing code, but it could cause code
Renato > bloat, leaks, or make things worse for other analyses. My personal view is that this is a bad move.

Michael > However, creating unreachable blocks sounds a bit hacky; it looks like we’re just going to use the IR
Michael > as some shadow data structure.

Another person also said, via private e-mail, that an Analysis pass creating Instructions/BasicBlocks "is generally
frowned upon".

I think this shows enough people dislike the idea of an Analysis pass creating Instructions/BasicBlocks and
using them to pass (part of) the Analysis info to the Transformation pass. That was our original assumption, and
it's good to know that our assumption has support (I don't know how wide, but it's at least a good start).
Now, the next question is how else to make something similar happen.

Renato > We discussed the split of analysis vs transformation with Polly years ago, and it was considered
Renato > "a good idea".

Same thinking here.

Renato > But that relied exclusively on metadata.
  Snip snip snip
Renato > Given that scenario of split responsibility, I'm curious as to your opinion on the matter of carrying (and
Renato > sharing) metadata between different vectorisation analysis passes and different transformation types.

Michael > why not use a shadow data structure :slight_smile: ? ScalarEvolution might be an example of how this can be
Michael > done: it creates a map from IR instructions to SCEV objects.

Our thinking is that what we'd like to communicate between VecAnalysis and VecTransform
is not simple enough to represent well in metadata form in the long run. As such, we are currently
pursuing an internal data structure (which will eventually become the data structure of
VecAnalysis, to be referenced from VecTransform through member functions). As I wrote before,
since we need to represent new control flow (within the Analysis), the internal data structure we
are introducing is an abstraction of a Basic Block, and the soon-to-come NFC patch essentially stops there.
The next step is to add new control flow for a new optimization/functionality (one that is useful enough in
LoopVectorize.cpp). At that moment, we inevitably have to represent newly generated "instructions"
in an abstracted way, and the abstracted Basic Blocks start to diverge from the underlying real Basic Blocks.
One might call this "a shadow data-structure". Once the RFC and the patches come out, I hope enough of
you will like the approach we are taking. We'll find out at that time.
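
To give a flavor (the names below are invented for illustration, not taken from our patch), such an abstraction might start out as a thin one-to-one wrapper over real Basic Blocks and only later grow synthetic instructions:

```cpp
#include <memory>
#include <vector>

namespace llvm { class BasicBlock; class Instruction; }

// Hypothetical sketch of a "shadow" CFG node.
struct ShadowInst {
  llvm::Instruction *Underlying = nullptr; // null for synthetic instructions
  // ...an abstract opcode/operand description would cover the synthetic case
};

struct ShadowBlock {
  llvm::BasicBlock *Underlying = nullptr;  // null once the shadow CFG diverges
  std::vector<std::unique_ptr<ShadowInst>> Insts;
  std::vector<ShadowBlock *> Succs;        // massaged freely; IR untouched
};
```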

I think enough has been said before the real RFC and NFC patch. Let me get back to the RFC/patch so that you'll
see them sooner rather than later. In the meantime, I'll be glad if more people express their likes/dislikes
about our general approach.

Thanks,
Hideki

Hi Hideki,

Thanks for sharing the roadmap. I'm curious what this shadow
basic block will look like. :slight_smile:

cheers,
--renato