Instruction Cost

Hi,

I’m looking for APIs that compute instruction costs, and noticed several of them.

  1. A series of APIs of TargetTransformInfo that compute the cost of instructions of a particular type (e.g. getArithmeticInstrCost and getShuffleCost)

  2. TargetTransformInfo::getOperationCost

  3. CostModel::getInstructionCost in lib/Analysis/CostModel.cpp

Only the first one is used extensively in LLVM’s code base, but the second and third ones seem more recently added and more general, which makes me wonder whether they are designed to replace the first approach.

So, what’s the recommended way to compute instruction costs?

Thanks,
Jingyue

It depends on what for...

#3 above is a *very* coarse but target-independent heuristic. It's used
for basic things that don't need a lot of detail at all.

#1 and #2 are both in TTI and useful for target-specific IR transforms such
as the vectorizers and the partial unroller. I would expect #2 to be
implemented in terms of #1, and to generally just be a wrapper that provides
a generic interface.

From: "Jingyue Wu" <jingyue@google.com>
To: "LLVM Developers Mailing List" <llvmdev@cs.uiuc.edu>
Sent: Wednesday, January 14, 2015 5:54:26 PM
Subject: [LLVMdev] Instruction Cost

Hi,

I'm looking for APIs that compute instruction costs, and noticed
several of them.

...

So, what's the recommended way to compute instruction costs?

It depends on how you intend to use them.

1. A series of APIs of TargetTransformInfo that compute the cost of
instructions of a particular type (e.g. getArithmeticInstrCost and
getShuffleCost)

These are used by the vectorizers and, roughly speaking, return costs in reciprocal-throughput units (the higher the throughput, the lower the cost).
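For concreteness, here is a minimal sketch of querying these per-opcode interfaces. It is my illustration, not code from the thread; the exact signatures have shifted between LLVM releases, and this roughly follows the TargetTransformInfo of this era, assuming a TTI reference obtained from the usual analysis machinery:

  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/Support/raw_ostream.h"

  using namespace llvm;

  // Sketch only: ask the target how expensive a couple of vector
  // operations are, roughly in reciprocal-throughput units
  // (higher throughput => lower number).
  static void printExampleCosts(const TargetTransformInfo &TTI,
                                LLVMContext &Ctx) {
    VectorType *VecTy = VectorType::get(Type::getFloatTy(Ctx), 4);
    auto AddCost = TTI.getArithmeticInstrCost(Instruction::FAdd, VecTy);
    auto ShufCost =
        TTI.getShuffleCost(TargetTransformInfo::SK_Broadcast, VecTy);
    errs() << "fadd <4 x float>: " << AddCost << "\n"
           << "broadcast shuffle: " << ShufCost << "\n";
  }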

2. TargetTransformInfo::getOperationCost

This is used by TTI::getUserCost, and is intended to provide a cost appropriate for balancing size and performance (it is used by the inliner's cost model, etc.).
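As a rough illustration of how this second flavor of cost gets consumed (again my sketch, not code from the thread, using the getUserCost interface of this era, which takes a const User* and returns values on the TCC_Free / TCC_Basic / TCC_Expensive scale):

  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/IR/Function.h"

  using namespace llvm;

  // Sketch only: accumulate the size/speed-style "user cost" over a
  // function, the kind of number inliner-like heuristics reason about.
  static unsigned sumUserCosts(Function &F, const TargetTransformInfo &TTI) {
    unsigned Total = 0;
    for (BasicBlock &BB : F)
      for (Instruction &I : BB)
        Total += TTI.getUserCost(&I); // TCC_Free/TCC_Basic/TCC_Expensive scale
    return Total;
  }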

3. CostModel::getInstructionCost in lib/Analysis/CostModel.cpp

This is a wrapper around the interfaces in (1) (getArithmeticInstrCost, etc.), and may or may not be useful for any particular use case. It is used primarily to construct regression tests for the cost models.

Only the first one is used extensively in LLVM's code base, but the
second and third ones seem more recently added and more general,
which makes me wonder whether they are designed to replace the first approach.

Hopefully this makes things clearer ;)

-Hal

CostModel::getInstructionCost also consults TTI (http://llvm.org/docs/doxygen/html/CostModel_8cpp_source.html#l00380). No?

Jingyue

From: "Jingyue Wu" <jingyue@google.com>
To: "Chandler Carruth" <chandlerc@google.com>
Cc: "LLVM Developers Mailing List" <llvmdev@cs.uiuc.edu>
Sent: Wednesday, January 14, 2015 6:09:10 PM
Subject: Re: [LLVMdev] Instruction Cost

CostModel::getInstructionCost also consults TTI (
http://llvm.org/docs/doxygen/html/CostModel_8cpp_source.html#l00380
). No?

Yes, and it is used to create regression tests for (1). We should also extend it to make creating regression tests for the things in (2) easier, but no one has done that yet (those costs are also much less well tuned).

-Hal

Sorry, I confused this with something else. Ignore me!

Do you think we could replace this with data from the CPU scheduling models
(if it exists in a particular CPU's model)?

One number for all CPUs may not always be adequate:
http://llvm.org/bugs/show_bug.cgi?id=21356

From: "Sanjay Patel" <spatel@rotateright.com>
To: "Hal Finkel" <hfinkel@anl.gov>
Cc: "Jingyue Wu" <jingyue@google.com>, "LLVM Developers Mailing List" <llvmdev@cs.uiuc.edu>
Sent: Wednesday, January 14, 2015 7:00:57 PM
Subject: Re: [LLVMdev] Instruction Cost

> 1. A series of APIs of TargetTransformInfo that compute the cost of
> instructions of a particular type (e.g. getArithmeticInstrCost and
> getShuffleCost)

These are used by the vectorizers and, roughly speaking, return costs
in reciprocal-throughput units (the higher the throughput, the lower
the cost).

Do you think we could replace this with data from the CPU scheduling
models (if it exists in a particular CPU's model)?

In part, but you'd still need a mapping from IR -> MI instruction(s). You might be able to derive this from the TableGen patterns, but that will only work for simple things (likely not shuffles, for example).

-Hal

Hi,

Thanks all for replying! I’ll try the CostModel class first.

Jingyue

Hi,

I'm using the code of the tool "lli" embedded in a process of a control application.

After the second JIT compile action I see the following messages:

First JIT compile, with a dump of the actual parameters:

xargv: -load
xargv: libqwt.so
xargv: -load
xargv: libosz.so
xargv: -entry-function=main
xargv: -use-mcjit
xargv: source.bc -> the jitted application code works!

Second JIT compile:

xargv: -load
xargv: libqwt.so
xargv: -load
xargv: libosz.so
xargv: -entry-function=main
xargv: -use-mcjit
xargv: source.bc
dV: for the -entry-function option: may only occur zero or one times!
dV: for the -use-mcjit option: may only occur zero or one times!
<input bitcode> option: may only occur zero or one times!

The execution engine is deleted after the first JIT action, but that does not seem to be sufficient to run multiple JIT compiles.

What else do I need to do to reset (or re-initialize) lli?

Regards

Armin

Hi Armin,

Argument parsing isn’t handled by the JIT. This sounds like you’re making redundant calls to cl::ParseCommandLineOptions?

Deleting the ExecutionEngine and any RTDyldMemoryManager instances that you’ve created should be enough to reset the JIT.

Cheers,
Lang.
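
(To make the above concrete: a rough sketch, entirely my own, with a placeholder request loop and file name, of a server that parses the command line exactly once and rebuilds the ExecutionEngine for every JIT request, using the MCJIT APIs of this era.)

  #include "llvm/ExecutionEngine/ExecutionEngine.h"
  #include "llvm/ExecutionEngine/MCJIT.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include "llvm/IRReader/IRReader.h"
  #include "llvm/Support/CommandLine.h"
  #include "llvm/Support/SourceMgr.h"
  #include "llvm/Support/TargetSelect.h"
  #include <memory>
  #include <string>
  #include <vector>

  using namespace llvm;

  int main(int argc, char **argv) {
    // Parse options exactly once; calling cl::ParseCommandLineOptions again
    // is what produces the "may only occur zero or one times!" errors above.
    cl::ParseCommandLineOptions(argc, argv, "JIT server\n");
    InitializeNativeTarget();
    InitializeNativeTargetAsmPrinter();

    bool MoreRequests = true; // placeholder for the real server condition
    while (MoreRequests) {
      LLVMContext Ctx;
      SMDiagnostic Err;
      std::unique_ptr<Module> M = parseIRFile("source.bc", Err, Ctx);
      if (!M)
        return 1;
      Module &Mod = *M;

      std::string ErrStr;
      std::unique_ptr<ExecutionEngine> EE(
          EngineBuilder(std::move(M)).setErrorStr(&ErrStr).create());
      if (!EE)
        return 1;

      EE->finalizeObject();
      std::vector<std::string> Args = {"source.bc"};
      EE->runFunctionAsMain(Mod.getFunction("main"), Args, nullptr);

      // The ExecutionEngine (and its memory manager) is destroyed here,
      // which is the reset that Lang describes above.
      MoreRequests = false; // placeholder
    }
    return 0;
  }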

Hi Lang,

Sorry for the late response; I have tried, without success, to solve the trouble with repeatedly processing a command line.
The implementation of the command line parser is a real hack, IMHO.

I'm simply trying to build a server process that JIT-compiles and executes a bitcode module with individual command line options for each request (in a server loop).

Do you have an idea how to reset the command line parser?

Regards

Armin

Lang Hames wrote:

Hi Armin,

Today there is no way to reset command line options to their defaults. The implementation of options doesn’t really have a concept of defaults; the default value is just what the option is initialized to. Once you set an option the default is gone, so it can’t be restored.

I’m currently working on cleaning this all up, and the eventual goal is to completely replace the existing option parsing and handling code. The new design does allow for resetting options to their defaults.

-Chris
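
(A tiny illustration of the behavior described above, my own sketch: the "default" of a cl::opt is just whatever value the underlying variable was initialized with, so once parsing overwrites it nothing remembers the original.)

  #include "llvm/Support/CommandLine.h"
  #include <string>

  using namespace llvm;

  // Hypothetical option, mirroring lli's -entry-function for illustration.
  static cl::opt<std::string> EntryFunc("entry-function", cl::init("main"),
                                        cl::desc("entry point to run"));

  int main(int argc, char **argv) {
    cl::ParseCommandLineOptions(argc, argv);
    // If -entry-function=foo was passed, EntryFunc now holds "foo"; the
    // initial "main" is simply gone, so there is no "default" to reset to.
    return 0;
  }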