Performance regression in llvm::ExecutionEngine::finalizeObject

Hello,

I am working on upgrading an application from LLVM 8.0.1 to LLVM 10.0.

I create a JIT llvm::ExecutionEngine (i386 CPU, CodeGenOpt::None) that contains a single module.
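
For reference, the engine is created roughly like this (a minimal sketch only; the module handling is simplified and the exact flags in the real application may differ):

// Minimal sketch of the setup described above; assumes the X86 target
// has been initialized and `M` is a std::unique_ptr<llvm::Module>.
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/Support/TargetSelect.h"
#include <memory>
#include <string>

std::unique_ptr<llvm::ExecutionEngine>
makeEngine(std::unique_ptr<llvm::Module> M) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();

  std::string Err;
  std::unique_ptr<llvm::ExecutionEngine> EE(
      llvm::EngineBuilder(std::move(M))
          .setErrorStr(&Err)
          .setEngineKind(llvm::EngineKind::JIT)
          .setOptLevel(llvm::CodeGenOpt::None) // codegen opt level NONE
          .setMCPU("i386")                     // i386 CPU
          .create());
  return EE;
}

// The slow call in question is then:
//   EE->finalizeObject();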

Below are some timings for the call to llvm::ExecutionEngine::finalizeObject:
LLVM 8.0.1 - less than 0.5 seconds
LLVM 10.0 - more than 170 seconds

Using VTune, I was able to trace the regression to llvm::SDNode::hasNUsesOfValue (which is called from llvm::X86TargetLowering::PerformDAGCombine).

Has anyone faced a similar problem, or does anyone have ideas about what the root cause or fix might be?

Thanks,
Gaurav

Hi Gaurav,

I don’t think the default JIT optimization settings for ExecutionEngine have changed, so if you didn’t change your JIT config either, then this probably isn’t a JIT bug.

Was the whole of the regression due to slowdowns in PerformDAGCombine? Was there any substantial change to the IR coming into CodeGen between LLVM 8 and LLVM 10 for your test case?

If you don’t find answers in this thread, I’d be inclined to ask again and phrase it as a CodeGen regression question; that might catch more people’s attention.

Regards,
Lang.