Different optimized IR with opt vs. ModulePassManager

I’m currently seeing different results between running a ModulePassManager with the default O3 optimization passes and opt -passes='default<O3>' on the same IR.

Specifically, I have a ModulePassManager initialized as described in the "Using the New Pass Manager" page of the LLVM 16.0.0git documentation, with

MPM = PB.buildPerModuleDefaultPipeline(llvm::PassBuilder::OptimizationLevel::O3);

that I apply to an IR module with MPM.run(module, MAM).
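For context, the setup looks roughly like the following sketch, which follows the standard analysis-manager registration from the documentation (LLVM 16-era API; names and headers may differ slightly across versions, and `runDefaultO3` is just an illustrative wrapper):

```cpp
#include "llvm/IR/Module.h"
#include "llvm/Passes/PassBuilder.h"

// Minimal new-pass-manager setup, per the docs: default-constructed
// PassBuilder (no TargetMachine, default PipelineTuningOptions).
void runDefaultO3(llvm::Module &M) {
  using namespace llvm;
  PassBuilder PB;

  LoopAnalysisManager LAM;
  FunctionAnalysisManager FAM;
  CGSCCAnalysisManager CGAM;
  ModuleAnalysisManager MAM;

  // Register all analyses and cross-register the proxies so that
  // module/function/loop passes can query each other's analyses.
  PB.registerModuleAnalyses(MAM);
  PB.registerCGSCCAnalyses(CGAM);
  PB.registerFunctionAnalyses(FAM);
  PB.registerLoopAnalyses(LAM);
  PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);

  ModulePassManager MPM =
      PB.buildPerModuleDefaultPipeline(OptimizationLevel::O3);
  MPM.run(M, MAM);
}
```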

However, when I output the same module before optimization with

module->print(llvm::errs(), nullptr, false, true);

and run opt -passes='default&lt;O3&gt;' -S on the output, I get a different result, despite opt (from a cursory read of its source) using the same default pipeline without modifications!

Is this expected? Does opt modify the default pipelines in some way I haven’t caught, or am I doing something wrong in applying the default pipeline? I do see some IR changes from my MPM.run call, so the pipeline is having some effect, just not the same effect as running through opt.

For anyone else who encounters this problem: opt does a few things differently that seem to matter:

  • It initializes its PassBuilder with an explicit TargetMachine and PipelineTuningOptions with vectorization enabled
  • It sets its TargetMachine’s CodeGenOpt::Level to Aggressive
  • It explicitly adds an alias analysis pass to its FunctionAnalysisManager

With these changes in place, I generate IR that appears to be identical to opt’s output with the default O3 pipeline.
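The three changes above can be folded into the PassBuilder setup roughly as follows (a sketch against the LLVM 16-era API; the `"generic"` CPU, empty feature string, and `mimicOptSetup` name are placeholders, and error handling is elided):

```cpp
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/IR/Module.h"
#include "llvm/MC/TargetRegistry.h"
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Target/TargetMachine.h"

void mimicOptSetup(llvm::Module &M) {
  using namespace llvm;
  InitializeNativeTarget();

  // 1. Build a TargetMachine for the module's triple with the codegen
  //    optimization level set to Aggressive, as opt does at -O3.
  std::string Err;
  const Target *T = TargetRegistry::lookupTarget(M.getTargetTriple(), Err);
  std::unique_ptr<TargetMachine> TM(T->createTargetMachine(
      M.getTargetTriple(), /*CPU=*/"generic", /*Features=*/"",
      TargetOptions(), /*RelocModel=*/std::nullopt,
      /*CodeModel=*/std::nullopt, CodeGenOpt::Aggressive));

  // 2. Enable vectorization in the PipelineTuningOptions and hand both
  //    the TargetMachine and the tuning options to the PassBuilder.
  PipelineTuningOptions PTO;
  PTO.LoopVectorization = true;
  PTO.SLPVectorization = true;
  PassBuilder PB(TM.get(), PTO);

  LoopAnalysisManager LAM;
  FunctionAnalysisManager FAM;
  CGSCCAnalysisManager CGAM;
  ModuleAnalysisManager MAM;

  // 3. Explicitly register the default alias-analysis pipeline with the
  //    FunctionAnalysisManager before registering the other analyses.
  FAM.registerPass([&] { return PB.buildDefaultAAPipeline(); });

  PB.registerModuleAnalyses(MAM);
  PB.registerCGSCCAnalyses(CGAM);
  PB.registerFunctionAnalyses(FAM);
  PB.registerLoopAnalyses(LAM);
  PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);

  ModulePassManager MPM =
      PB.buildPerModuleDefaultPipeline(OptimizationLevel::O3);
  MPM.run(M, MAM);
}
```

Note that the TargetMachine must outlive the pass pipeline, since the PassBuilder only stores a pointer to it.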