Cleaning up ‘br i1 false’ cases in CodeGenPrepare


I have come across a couple of cases where the code generated after
the CodeGenPrepare pass contains "br i1 false ..." with both the true
and false successors preserved, and this propagates further and
survives unchanged into the final assembly code/executable.

In CodeGenPrepare::runOnFunction, ConstantFoldTerminator (which
handles the "br i1 false" case) is called only once, so if the
transformations made by ConstantFoldTerminator() and
DeleteDeadBlock() themselves leave behind new "br i1 false" branches,
there is no further opportunity to clean them up. Calling this code
under (!DisableBranchOpts) in a loop until no more transformations are
made fixes the issue. Is this reasonable?

My simple fix (without any indentation changes) is:

--- a/llvm/lib/CodeGen/CodeGenPrepare.cpp
+++ b/llvm/lib/CodeGen/CodeGenPrepare.cpp

@@ -316,7 +316,9 @@ bool CodeGenPrepare::runOnFunction(Function &F) {

   if (!DisableBranchOpts) {
+ MadeChange = true;
+ while (MadeChange) {
     MadeChange = false;
     SmallPtrSet<BasicBlock*, 8> WorkList;

     for (BasicBlock &BB : F) {

@@ -352,6 +354,7 @@ bool CodeGenPrepare::runOnFunction(Function &F) {
     EverMadeChange |= MadeChange;
+ }
   if (!DisableGCOpts) {
   SmallVector<Instruction *, 2> Statepoints;

Testing the patch, I got a failure in
llvm/test/CodeGen/AMDGPU/nested-loop-conditions.ll. I am unsure
whether the test case simply needs to be regenerated to match the new
results or whether this is a real regression. I don't have any
expertise with GPUs, so any input in this regard would be very helpful.

testcase.ll : Original test case for this issue
base.s and new.s: llc output for the failing case
(nested-loop-conditions.ll) before and after applying the above patch.


testcase.ll.txt (4.79 KB)

base.s (5.5 KB)

new.s (5.11 KB)

I would expect the precise case you're running into is rare: the second iteration of the loop does nothing useful unless the IR specifically has an i1 phi node in a block whose predecessors were erased. And the default optimization pipeline runs SimplifyCFG at the very end, which is close to CodeGenPrepare, so the CFG simplification will usually be a no-op anyway.

We really shouldn't be doing this sort of folding in CodeGenPrepare in the first place. It looks like it was added to work around the fact that we lower llvm.objectsize later than we should.


we lower llvm.objectsize later than we should

Is there a well-accepted best (or even just better) place to lower objectsize? I ask because I sorta fear that these kinds of problems will become more pronounced as llvm.is.constant, which is also lowered in CGP, gains popularity.

(To be clear, I think it totally makes sense to lower is.constant and objectsize in the same place. I’m just saying that if the ideal piece of code to do that isn’t CGP, …)

After the “simplification” part of the optimization pipeline (after we’ve finished inlining and the associated function simplification passes have run), we’re unlikely to find new information that would help simplify an llvm.objectsize or llvm.is.constant call. So roughly around EP_VectorizerStart is probably appropriate. But someone should measure before we move it. -Eli

But someone should measure before we move it.

I ran numbers on a large, varied codebase with an up-to-date clang-based FORTIFY implementation. With the current forced lowering in CGP, we lowered 64,662 calls to @llvm.objectsize to non-failure values and lowered 111,224 to failure values. Making the instcombine iteration after EP_VectorizerStart require that all objectsize intrinsics are lowered, we found successful values for 64,552 llvm.objectsize intrinsics, and returned failure values for 120,616.
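(As a terminology aside, not part of the measurement above: the “failure value” is the sentinel the intrinsic returns when the object size cannot be determined. In current IR syntax, roughly:)

```llvm
declare i64 @llvm.objectsize.i64.p0(ptr, i1, i1, i1)

define i64 @known_size() {
  %buf = alloca [16 x i8]
  ; args: min = false, nullunknown = true, dynamic = false
  %sz = call i64 @llvm.objectsize.i64.p0(ptr %buf, i1 false, i1 true, i1 false)
  ; here lowering folds %sz to 16 (a "successful" value); when the
  ; underlying object cannot be identified, the intrinsic lowers to the
  ; failure value instead: -1 with min = false, 0 with min = true
  ret i64 %sz
}
```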

Taken literally, we fail to lower 63.2% of calls with “successful” values today, and this change makes us fail to lower 65.1%. However, given that the earlier lowering makes us lower a little over 9,000 additional intrinsics, I’d imagine that most of these ‘new’ failures got DCE’d away before hitting CGP in the past.

In any case, I’d like to note that these numbers don’t include calls to __builtin_object_size that clang is able to lower itself, so from a clang user’s perspective, any degradation mentioned here is likely an overstatement.

Given the above, I’ve no issues with forcing @llvm.objectsize lowering to earlier in the pipeline. I have a patch to do this as part of InstCombine. Happy to make a LowerBestEffortPostOptimizationIntrinsicsPass (or whatever) specifically for this, if that would be preferable. Also happy to dig into where some of those additional objectsizes appear from if people really want.