question on LSRInstance::OptimizeLoopTermCond()

This optimization is never useful on my target architecture. There doesn't appear to be a TTI tuning knob to turn it off. Any ideas on how to do that?


Without having scrutinized the code too closely, I suspect there's a bunch of downstream code that assumes this optimization has run and pattern-matches for its output. If you were to prevent it from running, you'd likely see a bunch of other optimizations stop firing. An easy way to test this is to just comment out the one call to it and see what happens.
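If the experiment pans out, the usual LLVM pattern for this kind of knob is a `cl::opt` guard around the call rather than deleting it. A sketch (the flag name is made up, and the exact call site in LoopStrengthReduce.cpp is paraphrased, not quoted):

```cpp
// Hypothetical flag in llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp.
// "lsr-term-cond-opt" is an invented name; pick whatever fits upstream style.
static cl::opt<bool> EnableLoopTermCondOpt(
    "lsr-term-cond-opt", cl::init(true), cl::Hidden,
    cl::desc("Enable LSR optimization of the loop terminating condition"));

// ... then at the single call site, gate the call instead of removing it:
if (EnableLoopTermCondOpt)
  OptimizeLoopTermCond();
```

A `cl::opt` only gives you a command-line switch; if you want a proper per-target default, the analogous change would be to add a query to TargetTransformInfo and consult it at the same call site, which is more work but is the mechanism TTI tuning knobs use.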

That said, if this optimization is never beneficial for your target, perhaps you could add a pass to your backend that undoes it?