Adding Extended-SSA to LLVM

Hey folks,

After a long discussion both offline and on, I put up a pass/intrinsic to add extended SSA at http://reviews.llvm.org/D29316.

Sean asked me to share it more broadly on llvm-dev, so here you go :)

For those not familiar with extended-SSA, it’s described in the paper “ABCD: Eliminating Array Bounds Checks on Demand”.

There is a large amount of explanation in the summary of the review that I don't want to paste here, but the short version is:

  1. We make copies of variables where they are used in comparisons that lead to assumes or branches, and where it's possible to determine something about their value from the branch (examples below, including an assume variant).

IE

define i32 @test1(i32 %x) {
%cmp = icmp eq i32 %x, 50
br i1 %cmp, label %true, label %false
true:
ret i32 %x

false:
ret i32 1
}

becomes

define i32 @test1(i32 %x) {
%cmp = icmp eq i32 %x, 50
br i1 %cmp, label %true, label %false

true: ; preds = %0
; Has predicate info
; branch predicate info { TrueEdge: 1 Comparison: %cmp = icmp eq i32 %x, 50 }
%x.0 = call i32 @llvm.predicateinfo.i32(i32 %x)
ret i32 %x.0

false: ; preds = %0
ret i32 1
}
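The same thing happens for comparisons feeding assumes. A minimal hedged sketch (hypothetical function name, same intrinsic as above; the exact placement is per the review):

define i32 @test_assume(i32 %x) {
  %cmp = icmp eq i32 %x, 50
  call void @llvm.assume(i1 %cmp)
  ; A copy would be inserted after the assume, and the dominated
  ; use below renamed to it:
  ; %x.0 = call i32 @llvm.predicateinfo.i32(i32 %x)
  ret i32 %x
}
declare void @llvm.assume(i1)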

All uses that are dominated by the predicate info are renamed to use it.
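To make that concrete, here is a hedged sketch (hypothetical function and value names) showing several dominated uses all being renamed to the copy:

define i32 @test2(i32 %x) {
  %cmp = icmp slt i32 %x, 50
  br i1 %cmp, label %true, label %false

true:
  %x.0 = call i32 @llvm.predicateinfo.i32(i32 %x)
  ; both uses below were %x originally; they are dominated by the
  ; copy, so they now use %x.0
  %a = add i32 %x.0, 1
  %b = mul i32 %a, %x.0
  ret i32 %b

false:
  ret i32 %x
}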

  2. We do so very quickly (it takes about 600ms to do 2 million blocks with comparisons; by comparison, most passes simply crash or take forever on the same file, and GVN, as an example, takes 500 seconds), and we only insert when and where the operands are actually used.

  3. The intrinsics are marked so they do not affect optimization (and currently, the passes that use them all destroy them); see the declaration sketch below. They can also detect when they've been moved, to make sure they are still valid if need be.

  4. They could be updated pretty easily if we wanted to.

With one real downside:

  1. We modify the IR in an analysis to do so :(
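For reference, a minimal sketch of a declaration with the attributes described in point 3 (the exact signature and attribute set are in the review; this is illustrative only):

; returned says the call always returns its first argument;
; readnone says it does not read or write memory.
declare i32 @llvm.predicateinfo.i32(i32 returned) readnone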

The main user at the moment is NewGVN, which uses it to do the equivalent of GVN's propagateEquality. I'd be happy to make it a utility called from NewGVN instead of an analysis. A number of folks, online and off, asked me to make it usable by others, so I did that :) Some folks want to use it to clean up existing passes (EarlyCSE, etc.); some want to use it in new passes.

FWIW: A number of alternate approaches were tried for NewGVN. The review details the different approaches other passes take and their tradeoffs. Three approaches were tried for NewGVN over the years prior to mainline submission (NewGVN deliberately tries to be an analysis followed by elimination, unlike our existing passes, which try to eliminate as they go).
This one is the clear winner in terms of speed, simplicity, maintainability, and what it covers.

Thoughts welcome,
Dan

Hi Daniel,

I didn't read the discussion in the review but would like to ask one
question anyway. In case it was already discussed elsewhere please just
point me there.

Why do you want to put this information in the IR instead of recomputing
or preserving it like we do with other analysis results that live only
in memory?

Cheers,
  Johannes

Hi Daniel,

I didn't read the discussion in the review but would like to ask one
question anyway. In case it was already discussed elsewhere please just
point me there.

FWIW: It's discussed in detail in the review (I detailed the approaches other passes take to this problem, and one of our goals is precisely to not do it as slowly as they do :P). That said, happy to answer :)

Why do you want to put this information in the IR instead of recomputing
or preserving it like we do with other analysis results that live only
in memory?

The short version is that it's very slow, and you can't actually do this in a way that is sane in an analysis. It's been tried three times :)

We also already have issues in a number of analyses that try to do this, with a significant number of open bugs due to the performance of trying to do it this way.

IE this approach works really, really well for some analyses, but is really, really bad for others. This is going to be one of the latter :)
One of the reasons is fairly simple:

MemorySSA, which does this, works well because there is no IR for memory, so there are no defs/uses it's replacing.

However, this analysis is based on regular SSA, so overlaying a completely different set of defs and uses does not mesh well with the existing infrastructure.

(Analyses that are not trying to give def/use-style info tend to work well.)

But that's precisely what we need to do in order to know what affects a given predicate. Thus, what you would build would essentially be a duplicate copy, in memory, of the existing IR, with the names changed (you can throw away some info, obviously). That alone gets very slow, because you'd have to walk through and duplicate sometimes thousands of uses/users into your in-memory structure, which you have to do at a minimum to be able to tell the user which uses are affected by a given piece of predicate info.

Even when you do this, it is in turn hard to use in passes, because they all expect one name (really, one Value *) to map to one value. This is, in fact, the whole underlying problem one is trying to solve. Here, even if you made those fake defs/uses Value *'s, it is still very tricky not to make a mess of things.
Our passes that try (GVN, EarlyCSE) use a combination of various scoped hash tables and eager IR replacement as they go to avoid this issue, but since NewGVN is meant to be split into an analysis and an eliminator, we can't do IR replacement as we go, and the former is very messy to make work in an analysis *and* still catch all the cases.
I can point you at a branch on GitHub where we implemented the exact approach GVN takes now, without modifying the IR, just tracking what the effects are. To say it is a complex mess is an understatement. It's also 3x as slow as the predicateinfo approach in the base case. Not to mention that predicateinfo required just a small amount of code to do stuff with the compare info (and that really should be a utility), whereas the other approach was a couple thousand lines.

Additionally, unlike MemorySSA, this could pretty much never be kept up to date, because it needs to be updated for any modification to the IR. This is true regardless of *invalidation* (IE most changes don't change correctness, but they still have to be applied).

Given that, it doesn't, IMHO, make sense to try to do that.
Also, I tried it anyway, and it's ... significantly slower, memory intensive, and not updatable :)

That doesn't make it impossible, mind you. It's definitely doable, but it's fairly complex vs. doing this, and I have trouble seeing the benefits.

Note that I'm not the only person to go through this exercise:

GCC builds a nearly identical form when doing VRP (see the part where it builds ASSERT_EXPRs, which are identical except that they directly attach the comparisons; we don't, because that constrains our ability to place predicateinfo for assume, which GCC doesn't have).
It then drops the form.

A number of other compilers do something similar (Jikes, for example, did this).

Some now even go further. For example, last I looked, Graal builds pruned SSI.

This is all a short way of saying "yeah, I feel like we tried all the other approaches first" :P

I see ;)

Thank you for this detailed summary that not only answers my question
but also all the follow-up questions I can think of right now.

Cheers,
  Johannes

Hi Daniel,

Many thanks for working on this!
SSI/e-SSA is the only way I'm aware of for doing efficient sparse analyses, so I'm definitely in favor of adding support for it in LLVM!

I read the discussion so far and did a cursory review of the patches, and I have just a few questions:
- Is your plan to keep e-SSA form all the time (do it once and maintain it throughout the pipeline), or to do it in the passes that need it and then get rid of it at the end of each pass? My concern (similar to Sanjoy's) is the increased overhead for matching patterns and so on. For example, how would instcombine work? Would we change the m_*() matchers to know how to see through the intrinsic?

- What you've implemented never creates phi nodes, right? (as opposed to the ABCD paper)
At least, the current algorithm seems to bail out if the successor has multiple predecessors. This might be an ok abstraction for GVN, but for things like CVP it's probably not. CVP merges information incoming from multiple edges (as will any fancier abstractions we may want to have in the future).
Any plans to support this? (i.e., is the current algorithm "easily" upgradable to support this scenario?)

- For a function/intrinsic marked as "returned(1) readnone", the obvious optimization to do would be to RAUW the return with the argument (and kill the function call altogether -- assuming readnone allows that; but I lost track of that discussion). Does instcombine do that? I understand that piggybacking on this existing feature is a good thing, but I'm just wondering if, say, InstCombine won't simply revert e-SSA?

- In processBranch/processAssume you also consider branches on ANDs/ORs of comparisons, and then each comparison is processed individually. However, this seems incorrect for one of the branches:
if (a && b) {
...
} else {
  // holds: !a OR !b
  use(a)
  use(b)
}

Right now it seems that the information that is attached to the else branch is !a AND !b, which would be incorrect. I haven't seen the client of this analysis, so this is just speculation, but the interface doesn't seem to have any indication for this case. Or maybe I'm misreading the code :)

- Now slightly unrelated: do you know of other representations capable of handling relational analyses? For example, with e-SSA:
if (x+y > 0) {
  // predicate info attached to 'x+y' only
  use(x)
}

So at use(x) no information will be available, even though we know that 'x+y > 0' holds there. I don't think LLVM has any analysis that can track these kinds of symbolic relations, but I was just curious if there's anything out there that can handle such analyses?

Thanks,
Nuno

Hi Daniel,

Many thanks for working on this!
SSI/e-SSA is the only way I'm aware of for doing efficient sparse
analyses, so I'm definitely in favor of adding support for it in LLVM!

I read the discussion so far and did a cursory review of the patches, and
I have just a few questions:
- Is your plan to keep e-SSA form all the time (do it once and maintain it
throughout the pipeline), or to do it in the passes that need it and then
get rid of it at the end of each pass?

At the moment, get rid of it.

My concern (similar to Sanjoy's) is the increased overhead for matching
patterns and so on.

So far I have measured exactly none, FWIW.

I'm also not worried, because given how much we try to match tons of useless variants, I'm positive I could make it net zero compile time no matter what happened by removing wasted matching time :)

For example, how would instcombine work?

Assuming you wished to use it there: the intrinsic already has the returned attribute set, which states, specifically, that it always returns its first argument.

If instcombine doesn't take advantage of that, it already has a problem with intrinsics marked with the returned attribute :)
(though yeah, they currently only exist in backends)

As for how you would make it work, there is no magic, of course.
We either change the matchers to see through it or we don't.
Both are valid options, with their own tradeoffs. Nobody has yet approached me about adding it to instcombine, so I haven't tried to form an opinion on which way I'd go :)

Note that other intrinsics, such as noalias, have the same issue.

Would we change the m_*() matchers to know how to see through the intrinsic?

- What you've implemented never creates phi nodes, right? (as opposed to the ABCD paper)

Correct.
It originally did, but it's not possible to sanely create phi nodes for
assumes in all cases.

At least, the current algorithm seems to bail out if the successor has multiple predecessors.

This is because it's not possible to place info in such cases without phi
nodes, and even then it's not clear that it makes sense.

Also note that such cases are all critical edges (otherwise I can't see how they have info to propagate :P), and if you break the critical edges, it works just fine.

The need to split critical edges is pretty much true for maximal results
for any optimization.

This might be an ok abstraction for GVN, but for things like CVP it's probably not. CVP merges information incoming from multiple edges (as will any fancier abstractions we may want to have in the future).

It's important to note: we sort phi node uses into the predecessor block
they belong to, so that restriction does *not* apply to the typical phi
node use case.

IE given:
define i32 @test12(i32 %x) {
   %cmp = icmp eq i32 %x, 0
   br i1 %cmp, label %cond_true, label %cond_false

cond_true:
   br label %ret

cond_false:
   br label %ret

ret:
   %res = phi i32 [ %x, %cond_true ], [ %x, %cond_false ]
   ret i32 %res
}

You will get:
; CHECK-LABEL: @test12(
; CHECK-NEXT: [[CMP:%.*]] = icmp eq i32 [[X:%.*]], 0
; CHECK-NEXT: br i1 [[CMP]], label [[COND_TRUE:%.*]], label [[COND_FALSE:%.*]]
; CHECK: cond_true:
; CHECK-NEXT: [[X_0:%.*]] = call i32 @llvm.ssa.copy.i32(i32 [[X]])
; CHECK-NEXT: br label [[RET:%.*]]
; CHECK: cond_false:
; CHECK-NEXT: [[X_1:%.*]] = call i32 @llvm.ssa.copy.i32(i32 [[X]])
; CHECK-NEXT: br label [[RET]]
; CHECK: ret:
; CHECK-NEXT: [[RES:%.*]] = phi i32 [ [[X_0]], [[COND_TRUE]] ], [ [[X_1]], [[COND_FALSE]] ]
; CHECK-NEXT: ret i32 [[RES]]

(as you can see, we test this)

So the cases I think you are thinking about are not problems, except in one degenerate case: where a critical edge leads directly to the use.

In such cases, note that the merges it performs can be proved to miss information or be non-optimal in the case of critical edges, even as it tries now.

All that said ...

Any plans to support this? (i.e., is the current algorithm "easily"
upgradable to support this scenario?)

It is easy to do, but it is ... messy, IMHO.

Note that such a thing is generally a mess.

Here is the degenerate case, as an example:
define i32 @f1(i32 %x) {
bb0:
   %cmp = icmp eq i32 %x, 0
   br i1 %cmp, label %bb2, label %bb1
bb1:
   br label %bb2
bb2:
   %cond = phi i32 [ %x, %bb0 ], [ %x, %bb1 ]
   %foo = add i32 %cond, %x
   ret i32 %foo

}

the critical edge from bb0 to bb2 leaves us no place to put predicateinfo for the true branch in the current algorithm (we will place the false info).

You could also always place it before each branch it's for (because that block should dominate all uses on the conditional edges already).

The actual placement is irrelevant, BTW. In the renaming algorithm, by the
time it goes to place them, it already knows what it *should* dominate.

The only thing it gets "wrong" is the phi uses, because they are placed in
the predecessor blocks right now as we sort.

But because we see them there, we can detect this case and change placement.
Because they are guaranteed to be at the beginning of the successor block, you are guaranteed that, if you want to insert, the only thing that can be between them and the def you want to insert there is other phi uses in the same boat.
So you can look ahead in the stream, find that def, insert it, use it, and pretend everything's great (without even pushing it onto the stack!)

This is tricky in a one-pass algorithm, as it really changes a simple renaming automaton into something that has two modes, "critical phi use" and "regular". In critical phi use mode, it finds the def, inserts it, and keeps processing until it hits a non-phi use; then it shifts back into regular mode.

But there's also exactly one case all of the above work affects: the case where the phi node with the use is a direct successor of the branch, such that the edge to that use is critical.

In any case, this is also precisely the problem that splitting critical edges resolves, and if you use break-crit-edges on the above, you get right answers all the time :)
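For instance, here is a hedged sketch of @f1 above after edge splitting (the split-block name is illustrative, roughly following BreakCriticalEdges' naming; copy placement is shown in comments):

define i32 @f1(i32 %x) {
bb0:
   %cmp = icmp eq i32 %x, 0
   br i1 %cmp, label %bb0.bb2_crit_edge, label %bb1
bb0.bb2_crit_edge:
   ; now there is a spot for the true-edge info:
   ; %x.0 = call i32 @llvm.ssa.copy.i32(i32 %x)
   br label %bb2
bb1:
   ; the false-edge info for %x goes here, as before
   br label %bb2
bb2:
   %cond = phi i32 [ %x, %bb0.bb2_crit_edge ], [ %x, %bb1 ]
   %foo = add i32 %cond, %x
   ret i32 %foo
}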

Note also that the thing that we replace, propagateEquality in GVN, also
restricts itself to single predecessors, and simply attempts to propagate
directly into critical phi node uses to avoid the issue (which we can't do
since NewGVN is an analysis followed by an eliminator).

So yes, we can fix this case if we need to.

- For a function/intrinsic marked as "returned(1) readnone", the obvious
optimization to do would be to RAUW the return with the argument (and kill
the function call altogether -- assuming readnone allows that; but I lost
track of that discussion). Does instcombine do that?

I understand that piggybacking on this existing feature is a good thing,

but I'm just wondering if, say, InstCombine won't simply revert e-SSA?

NewGVN is the only pass that uses it ATM, and it destroys the form.
The current expectation is that anything that uses it will destroy it.
This is the deliberate outcome of this discussion right now :)

If it becomes slow enough that we want to keep it up to date, IMHO, we can
burn that bridge when we come to it.

At the point at which we are trying to keep predicateinfo up to date in a large number of places, I'd argue we should just move to e-SSA/SSI as the default and be done with it, since that's what you'll have effectively done anyway.

I think it would be reasonable to see where it gets used and, if there are enough places, make a decision about what to do.

- In processBranch/processAssume you also consider branches on ANDs/ORs of
comparisons, and then each comparison is processed individually. However,
this seems incorrect for one of the branches:
if (a && b) {
...
} else {
// holds: !a OR !b
use(a)
use(b)
}

Right now it seems that the information that is attached to the else
branch is !a AND !b, which would be incorrect.

I could be pedantic and say the only information attached is a comparison, the branch, and whether the edge is the true edge or the false edge :)
Which is correct.

It also chains the info to give you the entire string of comparisons that
were applied.

However, in practice you are right, and clients are not likely to get this right with what I've given them.

Since, in practice, the only useful derived info is on the true branch of an and and the false branch of an or, I'm going to make it not attach info to the other branch.
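To make the case concrete, a minimal hedged sketch (hypothetical function, using the ssa.copy intrinsic from the tests above):

define i32 @andbr(i32 %x) {
  %a = icmp sgt i32 %x, 2
  %b = icmp slt i32 %x, 5
  %c = and i1 %a, %b
  br i1 %c, label %true, label %false

true:
  ; both %a and %b hold here, so per-comparison info on %x is sound:
  ; %x.0 = call i32 @llvm.ssa.copy.i32(i32 %x)
  ret i32 %x

false:
  ; only "!a OR !b" holds here; attaching both facts would effectively
  ; claim "!a AND !b", so no info will be attached on this edge
  ret i32 0
}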

Unless you can see a case where it makes sense to?

I haven't seen the client of this analysis, so this is just speculation,
but the interface doesn't seem to have any indication for this case. Or
maybe I'm misreading the code :)

No, you are correct. It just happens that the only client is smart enough to do something sane (I think :P)

- Now slightly unrelated: do you know of other representations capable of
handling relational analyses? For example, with e-SSA:
if (x+y > 0) {
  // predicate info attached to 'x+y' only
  use(x)
}

We could do that with this pass, actually.

It's a question of where you terminate the operand-finding.

You could actually just make it DFS the comparison operation and collect operands until you hit all the leaves, and insert for those. This would be a simple modification to collectCmpOperands.
(It will still be smart enough to only insert if there are actual uses affected by the info.)
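As a hedged illustration of that (hypothetical; collectCmpOperands does not do this today), leaf collection on your example would mean inserting copies for %x and %y, not just for the sum:

define i32 @leaves(i32 %x, i32 %y) {
  %sum = add i32 %x, %y
  %cmp = icmp sgt i32 %sum, 0
  br i1 %cmp, label %true, label %false

true:
  ; with a DFS over %cmp's operands down to the leaves, we would insert
  ;   %x.0 = call i32 @llvm.ssa.copy.i32(i32 %x)
  ;   %y.0 = call i32 @llvm.ssa.copy.i32(i32 %y)
  ; and rename the dominated uses of %x and %y accordingly
  ret i32 %x

false:
  ret i32 0
}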

We don't do that ATM, but there isn't anything I can see that would stop you.

The goals i set were to replace propagateEquality in GVN with NewGVN, so we
pretty much only produce info that we can directly propagate.

It all depends on how many names you want to generate.

I believe the study I saw, which tried splitting at different places, came to the conclusion that the best "bang for buck" was e-SSA.

IE the extra percentage you get from splitting more was not worth the cost.

But when I spoke with them about it, their biggest time sink was actually "we insert a lot of useless phis for operations that never get used, because computing liveness is hard". But as I've shown here, you don't even need to explicitly compute liveness to solve that problem. So who knows, maybe it does pay to split everywhere once that cost is eliminated.

I'd put this one in the "evaluation necessary" bucket.

So at use(x) no information will be available, even though we know that 'x+y > 0' holds there. I don't think LLVM has any analysis that can track these kinds of symbolic relations, but I was just curious if there's anything out there that can handle such analyses?

The folks I know who do this do, in fact, start with e-SSA and go from there, with the caveats I listed :)

(and D29519 was updated to not insert on the "wrong" and/or edges, and tests were added to ensure it happens)

Thanks for the answers! The plan makes sense to me.

Regarding phis, what about diamonds, e.g.:

define i32 @f(i32 %x) {
   br .., label %bb0, label %bb1
bb0:
   %cmp = icmp sgt i32 %x, 0 ; x > 0
   br i1 %cmp, label %bb2, label %bb3
bb1:
   %x2 = add nuw nsw i32 %x, 1
   %cmp2 = icmp sgt i32 %x2, 2 ; x+1 > 2 / x > 1
   br i1 %cmp2, label %bb2, label %bb3
bb2:
   %x3 = phi i32 [ %x, %bb0 ], [ %x2, %bb1 ]
   ; CVP says: %x3 is > 0
   ...
   br label %bb3
bb3:
    ...
}

CVP can infer that %x3 > 0 because the union of the intervals given to the phi node implies that.
Sure, we can split those edges, but maybe adding the predicate info to blocks bb0 & bb1 would solve the problem?

- In processBranch you also consider branches on ANDs/ORs of comparisons, and then each comparison is processed individually. However, this seems incorrect for one of the branches:
if (a && b) {
...
} else {
// holds: !a OR !b
use(a)
use(b)
}

Right now it seems that the information that is attached to the else branch is !a AND !b, which would be incorrect.

I could be pedantic and say the only information attached is a comparison, the branch, and whether the edge is the true edge or the false edge :)
Which is correct.

It also chains the info to give you the entire string of comparisons that were applied.

However, in practice you are right, and clients are not likely to get this right with what I've given them.

Since, in practice, the only useful derived info is on the true branch of an and and the false branch of an or, I'm going to make it not attach info to the other branch.

Unless you can see a case where it makes sense to?

For CVP there is. For example:
if (x > 2 && x < 5) {
...
} else {
  // known: x <= 2 || x >= 5
  // CVP produces a range for x: [5, 3) (no loss of information here at all)
  if (x == 4) ... // CVP folds this to false
}

So CVP can handle (simple) disjunctive information. Other ValueTracking analyses handle simple patterns as well, though probably at this time those can't use this new stuff unless we go all in with e-SSA.
Not sure how to export the information to clients, though. Supporting arbitrary boolean combinations of comparisons seems tricky, but maybe sticking to just one level of and/or is ok.

I'm mentioning CVP because it *really* needs to be refactored to use e-SSA/SSI. The current code is slow, very limited in scope (with somewhat arbitrary throttling), and too complicated.

Thanks,
Nuno

Thanks for the answers! The plan makes sense to me.

Regarding phis, what about diamonds, e.g.:

define i32 @f(i32 %x) {
  br .., label %bb0, label %bb1
bb0:
  %cmp = icmp sgt i32 %x, 0 ; x > 0
  br i1 %cmp, label %bb2, label %bb3
bb1:
  %x2 = add nuw nsw i32 %x, 1
  %cmp2 = icmp sgt i32 %x2, 2 ; x+1 > 2 / x > 1
  br i1 %cmp2, label %bb2, label %bb3
bb2:
  %x3 = phi i32 [ %x, %bb0 ], [ %x2, %bb1 ]
  ; CVP says: %x3 is > 0
  ...
  br label %bb3
bb3:
   ...
}

CVP can infer that %x3 > 0 because the union of the intervals given to the phi node implies that.
Sure, we can split those edges, but maybe adding the predicate info to blocks bb0 & bb1 would solve the problem?

Right, and these are all critical edges, as you say :)
I will actually try a trick where we insert the def twice and try to place the copies before the phi node uses.
We sort phi node uses by incoming block, so they should end up next to each other.
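To illustrate (a hypothetical sketch of the trick, not the committed algorithm; placements are shown as comments): because phi uses are per-incoming-edge, a copy placed at the end of each predecessor can rename just the phi operand coming in along the conditional edge, keeping the info edge-specific:

define i32 @f(i32 %x, i1 %c) {
entry:
  br i1 %c, label %bb0, label %bb1
bb0:
  %cmp = icmp sgt i32 %x, 0
  ; hypothetical: %x.0 = call i32 @llvm.ssa.copy.i32(i32 %x)
  br i1 %cmp, label %bb2, label %bb3
bb1:
  %x2 = add nuw nsw i32 %x, 1
  %cmp2 = icmp sgt i32 %x2, 2
  ; hypothetical: %x2.0 = call i32 @llvm.ssa.copy.i32(i32 %x2)
  br i1 %cmp2, label %bb2, label %bb3
bb2:
  ; only the phi operands would be renamed to %x.0 / %x2.0:
  %x3 = phi i32 [ %x, %bb0 ], [ %x2, %bb1 ]
  ret i32 %x3
bb3:
  ret i32 0
}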

- In processBranch you also consider branches on ANDs/ORs of comparisons, and then each comparison is processed individually. However, this seems incorrect for one of the branches:
if (a && b) {
...
} else {
// holds: !a OR !b
use(a)
use(b)
}

Right now it seems that the information that is attached to the else
branch is !a AND !b, which would be incorrect.

I could be pedantic and say the only information attached is a comparison, the branch, and whether the edge is the true edge or the false edge :)
Which is correct.

It also chains the info to give you the entire string of comparisons that
were applied.

However, in practice you are right, and clients are not likely to get this right with what I've given them.

Since, in practice, the only useful derived info is on the true branch of an and and the false branch of an or, I'm going to make it not attach info to the other branch.

Unless you can see a case where it makes sense to?

For CVP there is. For example:
if (x > 2 && x < 5) {
...
} else {
// known: x <= 2 || x >= 5
// CVP produces a range for x: [5, 3) (no loss of information here at all)
if (x == 4) ... // CVP folds this to false
}

Okay, so it does try to handle simple disjunctions.

So CVP can handle (simple) disjunctive information. Other ValueTracking analyses handle simple patterns as well, though probably at this time those can't use this new stuff unless we go all in with e-SSA.
Not sure how to export the information to clients, though. Supporting arbitrary boolean combinations of comparisons seems tricky, but maybe sticking to just one level of and/or is ok.

I mean, we can certainly mark and link the info however we want. I'm just not sure yet what the best way for clients to use it is.

Let me think about this as a follow-up, since it is at least now "correct" for obvious clients.

I'm mentioning CVP because it *really* needs to be refactored to use e-SSA/SSI. The current code is slow, very limited in scope (with somewhat arbitrary throttling), and too complicated.

Note that with patches to do LVI in DFS postorder instead of BFS order, it actually should be close to ideal :)

If CVP moves forward and queries LVI in RPO order, and LVI is doing PO, it
should be as close to O(1) work per LVI call as you can get.

Of course, it's still a mess, code wise, but ...

D29606 (a patch on top of the current patch set) handles all the critical edge and diamond cases you mentioned here.
It's a bit more complex to handle in one pass, so I did it as a patch set on top of the existing reviews so it can be reviewed separately.

I have not yet started to link together conjunctive/disjunctive info in a way that clients know it is conjunctive/disjunctive (but that seems easy enough).

Cool, thanks!
I just read the tests and they look good. I'll review the algorithm changes in the next few days, but feel free to commit the patch in the meantime.

Nuno