Restricting equality propagation for pointers: what's the best way, and how do we deal with regressions?

This is a known issue:

I recently encountered the same bug where LLVM was replacing a pointer with another one so I created D143129 in an attempt to fix the issue using alias analysis results (to target my specific bug).

@nikic suggested that the correct way to go would be to compare the pointers’ underlying objects, and that we should check IR diffs for llvm-test-suite to see the impact of this approach.

I posted the IR diffs for llvm-test-suite using -O3 in this comment (also see the follow-up comments for some discussion): ⚙ D143129 [GVN] Do not propagate equalities for noalias pointers

Reviewers suggested that I post this on discourse to seek more opinions. I think there are three main questions here:

  1. Is there a better way to deal with this issue?
  2. Should we allow replacement of pointers with null?
  3. How should we deal with other optimization regressions?

CC: @nikic @efriedma-quic @aqjune @nlopes


Imagine a compiler that put dead space between each variable (both static and dynamic)!
Then imagine a compiler that allocates static and dynamic data in a random order (not in the order written in the source).
Are any of these problem cases “solvable” if variables get allocated in random orderings, or with dead space?
Then imagine a compiler for a 128-bit address space, where each variable definition is placed in a page all by itself (an unbelievably large amount of dead space between variables) …

That is, these optimizations are utilizing the {static and dynamic} memory layout as if it were the only way to express the semantic content of the source code. It is not, and this may be a reason why you are having difficulties.

Adding various language restrictions (restrict, volatile, …) to the source does not actually alter the underlying problem: the source code is making assumptions that are not valid. In C parlance this enters UB territory.

Now, the LLVM compiler may take the position that “we know how our compiler lays data out in memory” in an attempt to make this UB into defined behavior; but that has already entered dangerous territory for the source code.

void test(int* gp1, int* gp2) {
  int g = 0;
  int a = 0, b = 0;
  int x = 7777, y = 6666; // also try swapping these
  int* p = &g;
  int* q = &g;
  int* r = &g;
}

Imagine the compiler allocating all the integer data {g, a, b, x, y} in a random order (legal by the language standard), and then allocating the pointers in another random order {q, p, r}, and say we end up with:
{ 0, g, 0, y, 0, q, a, 0, r, 0, b, 0, x, 0, 0, 0, p } on the stack…

#include <stdint.h>

int main() {
  int a = 0, y[1], x = 0;
  uintptr_t pi = (uintptr_t) &x;
  uintptr_t yi = (uintptr_t) (y + 1);
  uintptr_t n = pi != yi;
}

The assignment of &y[1] to yi makes yi hold an address just past the end of y. y[1] does not exist; only y[0] exists. {A C language guru would probably say this is borderline UB territory: the C language states one can calculate a pointer to one element past the extent of an array, but not dereference it. Calculating it is acceptable; dereferencing it is not. The snippet above only computes and compares the address, so it stays on the defined side of that rule, but the comparison result still depends on memory layout.}

Is there a real use case where solving these kinds of memory-layout (and aliasing) problems in the compiler “pays for itself”?

We ran into a miscompile issue with real-world code using “restrict” that runs on Android.

A lot of the problems we see in this space are synthetic, but just because they rarely show up in real code doesn’t mean they can’t…

Randomization doesn’t help in any meaningful way, I don’t think? We don’t make any particular promises about the ordering of variables anyway; we already reorder variables for other reasons.

Requiring padding between each variable solves some subset of the related issues, because pointers based on different variables will always compare unequal (assuming all GEPs are inbounds). But not anything involving “restrict”.

Giving null universal provenance seems a bit problematic… I don’t have any specific optimizations in mind, but the meaning of a provenance being “universal” is a bit ambiguous, depending how we model provenance. In particular, the relevant variable might not exist yet at the point where “null” is referenced.

Giving null empty provenance is fine for common cases, but there’s some edge cases in IR semantics. In particular, null_pointer_is_valid functions, and non-inbounds GEPs.

Either way, messing with the provenance of pointers that are equal to null seems unlikely to expose issues in most code; I’d prefer to land something sooner, even if it isn’t quite comprehensive. We can reconsider null semantics as we continue to refine our model of provenance.
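To make the null_pointer_is_valid edge case concrete, here is a hypothetical IR sketch (my own, not from the patch): in such a function, a pointer that compares equal to null can still be loaded from, so giving null “empty” provenance would wrongly make the load UB.

```llvm
; Hypothetical sketch: under null_pointer_is_valid, address 0 can be a
; valid object, so %p may be dereferenced even though it equals null.
define i32 @f(ptr %p) null_pointer_is_valid {
entry:
  %c = icmp eq ptr %p, null
  br i1 %c, label %is.null, label %not.null

is.null:
  ; Substituting null for %p here would give the load a pointer whose
  ; provenance we must define; "empty" provenance would be unsound.
  %v = load i32, ptr %p, align 4
  ret i32 %v

not.null:
  ret i32 0
}
```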

I don’t have any great ideas here. Maybe we can use some special logic when we analyze the operands of icmp operations?

According to the reported llvm-test-suite diffs and analysis, IIUC, having something like a GVN that only updates a specific set of users would be helpful. For example:

// assume that p and q are int*
if (p == q) {
  *p = 10;          // this must not be optimized to *q = 10 for any reason
  print(p < q + 1); // this can be optimized to print(q < q + 1), because
                    // pointer comparison does not consider provenance
}

Or this can be done by making GVN directly do this, at the expense of making GVN more complicated.

FWIW, I agree that it’s fine to just allow replacement with null for now, and come back to this question later. It’s still better than the current state.

I think GVN is relatively well-positioned to do this, because while GVN does add the mapping to the leader table (this part wouldn’t be safe), the actual replacement mostly relies on replaceDominatedUsesWith(), so it would be viable to check there whether a given use is only used in icmp/ptrtoint (looking through select/phi).

I’m not sure whether this will cover all regressions, but it’s worth a try.

I was not suggesting that randomization helps, randomization points out the futility of trying.

I was pointing out that:
A) a compiler is free to allocate variables any way it wants (random included), and
B) the programmer is not at liberty to assume some data-placement ordering, and
C) languages generally imply doing these things enter UB territory, and
D) accessing beyond the end of arrays is dangerous.

Which leads me to believe that the minuscule gain from optimizing these is likely outweighed by the dangers of chasing down all the boundary conditions that make the optimization viable.

There was an update in ⚙ D143129 [GVN] Restrict equality propagation for pointers to conditionally enable pointer replacements.
The idea is that, for operations that don’t care about pointer provenance (e.g., pointer comparison), we can freely replace pointers, as LLVM was doing before.
A link to the updated llvm-test-suite results is here: ⚙ D143129 [GVN] Restrict equality propagation for pointers

The number of unoptimized .ll files decreased to ~20% of the baseline, which is nice!
Also, the llvm-test-suite IR diffs show that buggy pointer replacements that might introduce undefined behavior happen in practice. For example, GVN currently performs this replacement in 82.ll:

  %call103 = tail call ptr @encodingLine(ptr noundef %messageIn) #17
  %22 = load ptr, ptr %t_next, align 8, !tbaa !30
  %cmp105 = icmp eq ptr %call103, %22
  br i1 %cmp105, label %if.then107, label %do.cond

  %23 = load ptr, ptr %22, align 8, !tbaa !28 ; replacing %22 with %call103 is illegal, but GVN is doing this

GVN was replacing %22 at the last load with %call103, the return value of the call to @encodingLine. This is illegal because %call103 could be an out-of-bounds pointer whereas %22 wasn’t.
@encodingLine has neither a function body nor meaningful attributes, so the inboundedness of %call103 cannot be inferred.
This patch fixes the bug, which is good.

There are still corner cases where pointer replacement could be performed soundly, but they might require implementing complicated analyses.
There might also be cases that running llvm-test-suite could not cover.

Does anyone have an idea of when we can declare this patch good enough?
Should we collect performance-issue reports for a sufficiently long period after the bug-fix patch is finally accepted?