TargetTransformInfo getOperationCost uses

Hi,

I’m trying to implement the TTI hooks for AMDGPU to avoid unrolling loops that contain operations with huge expansions (e.g. integer division).

The values that are ultimately reported by opt -cost-model -analyze (what the existing cost model tests check) don’t seem to matter for this. The huge cost I’ve assigned to division doesn’t prevent the loop from being unrolled, because it isn’t actually consulted during loop unrolling.

The loop unroller uses CodeMetrics, which via getUserCost ultimately uses TargetTransformInfoImplBase::getOperationCost(), which returns various fixed values (4, i.e. TCC_Expensive, for division, but this isn’t nearly expensive enough).
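
For reference, the default logic is roughly the following. This is paraphrased from memory rather than copied from TargetTransformInfoImpl.h, so treat it as a sketch of the shape of the code, not the exact upstream source:

  // Approximate shape of TargetTransformInfoImplBase::getOperationCost();
  // cases that return TCC_Free (certain casts, GEPs, etc.) are omitted.
  unsigned getOperationCost(unsigned Opcode, Type *Ty, Type *OpTy) {
    switch (Opcode) {
    case Instruction::UDiv:
    case Instruction::SDiv:
    case Instruction::URem:
    case Instruction::SRem:
      return TargetTransformInfo::TCC_Expensive; // the fixed "4" mentioned above
    default:
      return TargetTransformInfo::TCC_Basic;     // most operations count as one unit
    }
  }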

getOperationCost only uses the opcode and type to estimate, so it doesn’t require a value. No target overrides it. The hooks that targets actually do implement, like getArithmeticInstrCost, use some information about the operands and so require a value. These don’t appear to be used by the cost model at all; instead they are queried in specific places in some passes.
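
For concreteness, something like the following (untested) override in AMDGPUTargetTransformInfo.h is what I expected to be able to write. It assumes the CRTP layering routes getUserCost() through the derived class’s getOperationCost(), and the 64x weight is just an illustrative placeholder, not a tuned number:

  // Untested sketch for AMDGPUTTIImpl (where BaseT is the usual
  // BasicTTIImplBase<AMDGPUTTIImpl> typedef):
  unsigned getOperationCost(unsigned Opcode, Type *Ty, Type *OpTy) {
    switch (Opcode) {
    case Instruction::UDiv:
    case Instruction::SDiv:
    case Instruction::URem:
    case Instruction::SRem:
      // Integer division expands to a long instruction sequence here, so
      // report something far larger than TCC_Expensive for size purposes.
      return 64 * TargetTransformInfo::TCC_Basic; // made-up weight
    default:
      return BaseT::getOperationCost(Opcode, Ty, OpTy);
    }
  }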

Why doesn’t any target override getOperationCost, and why aren’t there any other hooks that change its behavior? Having these two parallel paths for operation costs is confusing, especially since I expected to be able to use the -cost-model output for testing all of the computed costs.

-Matt

Hi Matt,

So the problem here is that, as you imply, TTI has two cost models: one is used for vectorization (and has received a fair amount of attention), and the other is used for inlining/unrolling (and has received less attention).

The vectorization cost model is, generally speaking, concerned with instruction throughputs, and is used to estimate the relative speed of a vectorized loop vs. the scalar one (it assumes that a proper amount of ILP is available, or created by interleaving, such that the throughputs matter much more than the latencies).

The “user” cost model, used by the inliner and unroller, is concerned with estimating something more closely related to code size (although there is obviously a desired correlation with performance). It has primarily been customized to let the inliner/unroller understand, on a target-specific basis, that certain zexts/sexts are free, etc.

As you also imply, there has been a lot more target-specific customization work on the vectorization cost model. To be honest, I don’t think this situation is ideal (we could have one cost model that returns size, latency, and throughput information and is used by all clients). Nevertheless, hopefully this makes things somewhat clearer.
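
To make the split concrete, here is roughly how the two clients end up querying the same integer divide. Treat it as a sketch of the interfaces involved (getUserCost vs. getArithmeticInstrCost) rather than the actual client code:

  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Sketch: the same divide instruction seen through the two cost models.
  static void showBothCosts(const TargetTransformInfo &TTI, BinaryOperator *Div) {
    // Size-flavored "user" cost: what CodeMetrics (and thus the inliner
    // and unroller) asks for; defaults come from getOperationCost().
    auto UserCost = TTI.getUserCost(Div);

    // Throughput-flavored cost: what the vectorizers ask for; this is the
    // hook targets customize via getArithmeticInstrCost overrides.
    auto VectorizerCost =
        TTI.getArithmeticInstrCost(Div->getOpcode(), Div->getType());

    (void)UserCost;
    (void)VectorizerCost;
  }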

-Hal

Side note on TargetTransformInfoImplBase::getOperationCost() and TCC_Expensive:
As part of fixing https://llvm.org/bugs/show_bug.cgi?id=24818, I made div/rem instructions default to expensive.

GPU targets should probably override those back to “TCC_Basic” so CodeGenPrepare won’t despeculate execution of those ops when they’re operands of a select.
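
Concretely, I mean something like this (hypothetical, untested) in the GPU target’s TTI implementation; it is the same getOperationCost() hook mentioned above, just pushed in the other direction:

  unsigned getOperationCost(unsigned Opcode, Type *Ty, Type *OpTy) {
    if (Opcode == Instruction::UDiv || Opcode == Instruction::SDiv ||
        Opcode == Instruction::URem || Opcode == Instruction::SRem)
      return TargetTransformInfo::TCC_Basic; // keep CGP speculating these
    return BaseT::getOperationCost(Opcode, Ty, OpTy);
  }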

If we continue along that path (default costs are based on typical CPUs rather than GPUs…because there are more of those targets?), I’d like to change sqrt and possibly other math intrinsics in the same way - default them to TCC_Expensive.

> The vectorization cost model is, generally speaking, concerned with instruction throughputs, and is used to estimate the relative speed of a vectorized loop vs. the scalar one (it assumes that a proper amount of ILP is available, or created by interleaving, such that the throughputs matter much more than the latencies).

Note that the loop vectorizer also acts as an unroller, and in doing so considers additional costs such as register pressure and the expansion of reductions.

The SLP vectorizer should also consider latency costs in latency-bound situations, rather than considering only throughput costs, as observed in PR25108.

Ayal.