Instruction selection algorithm

Is the algorithm described in the paper "Near-Optimal Instruction Selection on DAGs" (https://llvm.org/pubs/2008-CGO-DagISel.html) actually used in LLVM instruction selection?
I've studied the implementation (SelectionDAGISel.cpp) and I see that instructions are selected by a target-specific MatcherTable generated by llvm-tblgen. In the implementation the first matching pattern from the MatcherTable is selected, but in the paper all matching patterns (called tiles there) are found for each DAG node, a cost is estimated for every tile and node, and the best tiles are then chosen by dynamic programming.
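If I understand the paper correctly, the cost computation it builds on looks roughly like the sketch below. This is only my own toy illustration, not LLVM code; the names (Tile, Node, selectTiles) and the costs are made up.

// A toy illustration of the paper's bottom-up cost computation, not LLVM code.
// Each DAG node keeps every tile (pattern) that matches at it; a pass over a
// topological order then picks, per node, the tile whose own cost plus the
// best costs of the operands it leaves uncovered is minimal.
#include <climits>
#include <cstdio>
#include <vector>

struct Node;

struct Tile {
  int cost;                   // cost of the instruction this tile would emit
  std::vector<Node *> leaves; // operands the tile leaves uncovered
};

struct Node {
  std::vector<Tile> matches;  // all tiles matching with this node as root
  int bestCost = INT_MAX;     // filled in by selectTiles
  const Tile *bestTile = nullptr;
};

// Assumes `order` lists operands before their users (a topological order).
void selectTiles(std::vector<Node *> &order) {
  for (Node *N : order) {
    for (const Tile &T : N->matches) {
      int c = T.cost;
      for (Node *Leaf : T.leaves)
        c += Leaf->bestCost;  // reuse the already-computed operand costs
      if (c < N->bestCost) {
        N->bestCost = c;
        N->bestTile = &T;
      }
    }
  }
}

int main() {
  // add(load, load): a folded "load-and-add" tile (cost 1, covering one load)
  // competes with a plain add tile that needs both loads selected separately.
  Node L1, L2, Add;
  L1.matches = {{1, {}}};
  L2.matches = {{1, {}}};
  Add.matches = {{1, {&L1, &L2}},  // plain add: total cost 1 + 1 + 1 = 3
                 {1, {&L2}}};      // folded load+add: total cost 1 + 1 = 2
  std::vector<Node *> order = {&L1, &L2, &Add};
  selectTiles(order);
  std::printf("best cost at root: %d\n", Add.bestCost); // prints 2
}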

Could you please explain to me how LLVM instruction selection relates to the NOLTIS algorithm in the paper?

Does it make sense to use dynamic programming for instruction selection based on the MatcherTable?

Hi Ivan,

Matcher table generation, which is implemented in utils/DAGISelEmitter.cpp, does use heuristics such as the number of instructions a pattern will cover and latency (not the latency that the target's scheduling model defines) while emitting the candidate patterns for a given DAG node.

The current implementation may not be an implementation of the algorithm in toto, though.
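As a rough illustration of that ordering idea (hypothetical types and scoring, not the actual DAGISelEmitter code):

// Rough sketch only, with made-up names and scoring; not the actual
// DAGISelEmitter logic. Candidate patterns are scored by a heuristic
// (here: nodes covered plus a target-supplied bonus) and emitted in
// decreasing score order, so the first hit at selection time is the one
// the heuristic prefers.
#include <algorithm>
#include <string>
#include <vector>

struct Pattern {
  std::string name;
  int nodesCovered;    // size of the pattern's source DAG
  int addedComplexity; // target bonus, in the spirit of AddedComplexity in .td files
};

static int score(const Pattern &P) {
  return P.nodesCovered + P.addedComplexity;
}

void orderForMatcherTable(std::vector<Pattern> &Patterns) {
  std::stable_sort(Patterns.begin(), Patterns.end(),
                   [](const Pattern &A, const Pattern &B) {
                     return score(A) > score(B); // highest score emitted first
                   });
}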

Thanks,
Jatin

LLVM performs greedy, bottom-up instruction selection. At each step it selects the pattern that absorbs the most nodes (roughly: the order can be tweaked by the target using AddedComplexity, which is often used to express that a particular pattern is more profitable than it would otherwise appear).
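
A minimal sketch of what "greedy, first match wins" means here, with hypothetical types rather than the real MatcherTable interpreter:

// Hypothetical types, not the real MatcherTable interpreter: patterns are
// tried in their pre-sorted order and the first one that matches wins;
// no cost comparison against later candidates is done.
#include <functional>
#include <vector>

struct DAGNode {
  int opcode = 0; // plus operands, value types, etc. (omitted)
};

struct SelPattern {
  std::function<bool(const DAGNode &)> matches; // does the pattern apply here?
  std::function<void(DAGNode &)> emit;          // rewrite into a machine instruction
};

bool selectNode(DAGNode &N, const std::vector<SelPattern> &Table) {
  for (const SelPattern &P : Table) {
    if (P.matches(N)) { // first match wins; later, possibly cheaper, patterns are never tried
      P.emit(N);
      return true;
    }
  }
  return false; // nothing matched: a selection failure
}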

I don’t personally think there is that much to gain from an algorithm significantly better than greedy; in practice most optimization goals seem to be representable through heuristics on the regular selection algorithm, at least for real machines. Additionally, I kind of suspect that there is far more to gain from better transformations during instruction lowering (e.g. in Combine1/Legalize/Combine2) than from better ordering in Select().

—escha