Hi all,
We’re planning to turn on -consider-local-interval-cost for all targets, which fixes sub-optimal register allocation in certain cases. Since this is a target-independent change, we’d like to give people the opportunity to run their own numbers or raise any concerns.
The option enables a more accurate consideration of local interval costs when selecting a split candidate. It is already enabled on X86, and we’ve seen the same issue (see below) on AArch64. We expect this is also a (latent) issue for other targets, so enabling the option for all targets seems like the right thing to do.
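For anyone who wants to compare codegen locally before the default changes, the option can be toggled directly on the llc command line, or passed through from clang via -mllvm. This is just a sketch; the input file names are placeholders:

```shell
# Compare register allocation with and without the option
# (test.ll is a placeholder for your own input)
llc -consider-local-interval-cost=false test.ll -o baseline.s
llc -consider-local-interval-cost=true  test.ll -o with-local-cost.s
diff baseline.s with-local-cost.s

# The same toggle when compiling from source with clang
clang -O2 -mllvm -consider-local-interval-cost=true -S test.c -o test.s
```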
The tl;dr is that turning on this option has a small impact on compile time, and shows some positives and negatives on individual benchmarks but no change in geomean for either SPEC2017 or the LLVM test suite on AArch64. The full details are below and on https://reviews.llvm.org/D69437. The commit that added -consider-local-interval-cost is https://reviews.llvm.org/rL323870.
Please let us know what you think.
Cheers,
Sanne