Writing loop transformations on the right representation is more productive

In the 2018 LLVM DevMtg [1], I presented some shortcomings of how LLVM
optimizes loops. In summary, the biggest issues are (a) the complexity
of writing a new loop optimization pass (including needing to deal
with a variety of low-level issues, a significant amount of required
boilerplate, the difficulty of analysis preservation, etc.), (b)
independent optimization heuristics and a fixed pass ordering, and (c)
code explosion due to versioning. Currently, different people are
working on improving different loop optimization passes such as
LoopInterchange, LoopFuse and LoopVectorize. Even if LLVM had
'perfect' individual loop passes, they would still have the
aforementioned problems because they remain independent, and the
system as a whole would be suboptimal.

Instead of each loop pass being a major component, the heavy lifting
could be done by a framework of which each transformation itself is a
small part. In this RFC, I would like to work towards a consensus on
how such a framework could look. I already outlined a possible
solution in the same presentation [1] and a publication [7], which is
partially based on a previous RFC [8]. All feedback is welcome,
including a simple '+1'.

The central idea is to use a modifiable loop tree -- similar to
LoopInfo -- as the primary representation. LLVM-IR is converted to a
loop tree, then optimized and finally LLVM-IR is generated again for
subtrees that are considered profitable. This is not a new concept, it
has already been used in compilers such as IBM XL Fortran (called ASTI
[4]) and Silicon Graphics/Open64 (called LNO [10]), and in research
such as the Value State Dependence Graph [3].

Other features include the following:

1. Candidate selection through cost functions
2. Cheap copies using Red/Green Trees
3. Application of transformations from high-level to low-level
4. Represents loops and predicates instead of CFGs
5. Data and control dependencies in one graph
6. Late fallback versioning at IR regeneration
7. Standardized API for analyses with multiple implementations
8. Abstract representation of statements
9. Expansion of use-def chains to arrays when spanning loops

To elaborate on each of these:

I seem to have dropped the "RFC" in the title accidentally. That is,
this is meant to be a Request For Comments.

Michael

1. Candidate selection through cost function
--------------------------------------------
Instead of needing to know which transformation is profitable before
applying it, create a copy of the data structure, modify it, and
compare it to the original. Moreover, holding multiple, differently
optimized copies allows evaluating each variant using a cost function
and selecting the 'best' one when re-generating LLVM-IR (or re-using
the original LLVM-IR).
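
A minimal sketch of the selection loop this implies (all types and
names below are hypothetical stand-ins, not a proposed API): generate
variants via cheap copies, keep whichever one the cost function
favors, and fall back to the original otherwise.

    // Hypothetical sketch of cost-driven candidate selection.
    #include <vector>

    struct LoopTree {
      double EstimatedCycles = 0; // stand-in for real cost-model inputs
      // loops, statements, dependences, ...
    };

    // Placeholder cost function; a real one would query the target.
    double estimateCost(const LoopTree &T) { return T.EstimatedCycles; }

    using Transformation = LoopTree (*)(const LoopTree &);

    LoopTree selectBest(const LoopTree &Original,
                        const std::vector<Transformation> &Candidates) {
      LoopTree Best = Original; // re-using the original is the default
      double BestCost = estimateCost(Original);
      for (Transformation Apply : Candidates) {
        LoopTree Variant = Apply(Original); // cheap copy, see red/green
        double Cost = estimateCost(Variant);
        if (Cost < BestCost) {
          Best = Variant;
          BestCost = Cost;
        }
      }
      return Best;
    }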

This sounds a lot like VPlan.

Instantiating every possible sequence of transformations of course is
not feasible, so the search space needs to be pruned. This could be
made dependent on the optimization level.

Are you planning on using heuristic searches? This could make the
vectoriser unstable under small input changes, making it hard to get
consistent results and reliable tests.

I'm not against such an idea, but I think we need to be conservative in
such a core component of the compiler.

It would be nice to have -Ogocrazy to mean "keep going until you find
something", but usually, -O3 should terminate. :slight_smile:

2. Cheap copies using Red/Green Trees
-------------------------------------

This seems like an elegant approach to a complex problem.
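
For context, since the term otherwise appears without explanation: in
the red/green technique (known from the C# Roslyn compiler), the
'green' tree is immutable and its nodes can be shared between
variants, so a copy with one local change only rebuilds the path from
the changed node to the root. A toy sketch, not the proposed
implementation:

    // Toy green tree: immutable nodes shared via shared_ptr. "Editing"
    // a child allocates new nodes only along the root-to-change path;
    // every untouched subtree stays shared with the original tree.
    #include <cstddef>
    #include <memory>
    #include <string>
    #include <vector>

    struct GreenNode {
      std::string Kind; // e.g. "loop", "stmt"
      std::vector<std::shared_ptr<const GreenNode>> Children;
    };
    using GreenRef = std::shared_ptr<const GreenNode>;

    // Return a tree equal to Root except for one replaced child;
    // Root itself is never mutated.
    GreenRef withChild(const GreenRef &Root, std::size_t Idx,
                       GreenRef New) {
      auto Copy = std::make_shared<GreenNode>(*Root); // shallow copy
      Copy->Children[Idx] = std::move(New);
      return Copy;
    }

A 'red' layer of thin wrappers carrying parent pointers can then be
materialized on demand for upward navigation without ever being
copied.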

3. Apply transformations from high-level to low-level
-----------------------------------------------------
Optimization should be applied from very specialized to very general
(potentially after some canonicalization). For instance, the first
step could be detecting common idioms such as gemm and replacing them
with either a BLAS function call or applying well-studied
optimizations like BLIS to them. After such an idiom has been
detected, no other transformation should be applied to it.
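
To make the gemm example concrete (an illustration only; the detection
machinery itself is not sketched here): the nest below is the kind of
idiom that could be recognized and, given the legality preconditions,
replaced wholesale by a tuned routine such as BLAS's cblas_dgemm, or
rewritten with a BLIS-style blocked schedule.

    // Naive row-major gemm, C += A * B (A is MxK, B is KxN, C is MxN).
    // Once recognized as the gemm idiom, no generic loop transformation
    // would touch this nest anymore; it is replaced as a whole.
    void gemm(int M, int N, int K, const double *A, const double *B,
              double *C) {
      for (int i = 0; i < M; ++i)
        for (int j = 0; j < N; ++j)
          for (int k = 0; k < K; ++k)
            C[i * N + j] += A[i * K + k] * B[k * N + j];
    }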

I'm sceptical of such machinery. People usually write bad code (me
included) and trying to match multiple patterns to the same semantics
will be hard, considering how lenient C++ is about pointer handling
and type conversions.

If you do find a match and convert to a library function call, then
well, you can't do anything with it even if you wanted to. :-)

Mid-level transformations may try to map entire loop nests to cache-
and compute hierarchies (SIMT threads, multiprocessors, offloading,
etc) by applying transformations such as tiling, loop interchange and
array packing.

This is hard but very profitable. However, feels to me again that this
is just VPlan packed differently.

While VPlan still has no way yet to handle even simple outer-loops
(has that landed yet?), once we do, then the natural progression will
be to start understanding their semantics and possibly make high level
assumptions like that.

6. Late fallback versioning at IR regeneration
----------------------------------------------
When a transformation is applied, it can attach conditions (no
aliasing, no integer wrap, value restrictions, etc.) under which the
transformation is valid. During LLVM-IR generation, these conditions
are collected and emitted as run-time conditions. If the condition
fails, the original code is executed.
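
In source-level terms, the regenerated code could look roughly like
the sketch below (a hypothetical example; the guard would be assembled
from the collected preconditions, and there is only one fallback copy
no matter how many transformations contributed conditions):

    // One runtime guard, one optimized variant, one fallback.
    #include <cstddef>
    #include <cstdint>

    void saxpy(float *X, const float *Y, std::size_t N, float A) {
      auto B = [](const float *P) { // compare addresses portably
        return reinterpret_cast<std::uintptr_t>(P);
      };
      bool NoAlias = B(X + N) <= B(Y) || B(Y + N) <= B(X);
      if (NoAlias) { // collected runtime conditions hold
        for (std::size_t I = 0; I < N; ++I) // transformed version
          X[I] += A * Y[I];                 // (shown unchanged here)
      } else {
        for (std::size_t I = 0; I < N; ++I) // original code
          X[I] += A * Y[I];
      }
    }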

This sounds like it will bloat code for a lot of cold cases. Or worse,
get it wrong, and put hot code in the cold path.

7. Standardized API for analyses with multiple implementations
--------------------------------------------------------------

These are good to have regardless of which vectorisation strategy we use.

8. Abstract representation of statements
----------------------------------------
For instance, assuming that %add is not used a second time, in
the example below

    %add = add i64 %arg, 2
    %mul = shl i64 %add, 1

the two instructions should always be computed together in the same
loop iteration.
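
A toy model of the abstraction (hypothetical types): the unit a
transformation schedules is the statement, so the add and the shl
above can never be torn into different iterations.

    // Toy statement abstraction: instructions inside a Statement always
    // travel together; loop transformations only reorder Statements.
    #include <string>
    #include <vector>

    struct Statement {
      // e.g. {"%add = add i64 %arg, 2", "%mul = shl i64 %add, 1"}
      std::vector<std::string> Instructions;
    };

    struct LoopNode {
      std::vector<Statement> Body; // scheduling granularity: Statement
    };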

This may inhibit further combines, or even detection of
target-specific patterns for SIMD code that aren't common.

I agree that not forcefully binding statements with instructions is a
good idea, but this may need a target-specific pattern matcher to be
more sensitive to target idiosyncrasies.

9. Expansion of use-def chains to arrays when spanning loops
------------------------------------------------------------
The transforming pass has to consider this during its profitability
model. The big advantage is that in terms of correctness, use-def
chains do not manifest false dependencies.
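
An illustration in source terms (hypothetical example): after
distributing the loop below, the value that used to flow through the
scalar t spans two loops and must be expanded into an array of n
temporaries, which is exactly the cost the profitability model has to
weigh.

    // Before distribution (use-def chain within one iteration):
    //   for (int i = 0; i < n; ++i) { t = a[i] + 1; b[i] = t * t; }
    //
    // After distribution, 't' gets one element per iteration:
    void distributed(const float *a, float *b, float *t, int n) {
      for (int i = 0; i < n; ++i)
        t[i] = a[i] + 1; // use-def chain expanded to an array
      for (int i = 0; i < n; ++i)
        b[i] = t[i] * t[i];
    }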

Sounds good, but also creates the problem of how to handle the array.
If 'n' is unknown, or dependent on SIMD widths or number of threads,
it's too low level to add anything that is guaranteed to not change
the performance profile of the original loop.

Q: Relation to the new/legacy pass manager?
A: LLVM's pass managers are unfortunately not designed to apply to
subtrees nor persistent data structures besides the LLVM-IR itself.

By design. The more alternative persistent data structures you have
being modified by a series of passes, the harder it is to know what
did what and where you are.

Instead, the loop tree optimizer would be its own monolithic pass on
the pass manager level (like MachinePassManager and VPlan). My idea is
to add it somewhere before LoopVectorize, but after the inliner,
potentially replacing most other loop transformations.

To me this almost sounds like Polly. Take LLVM IR into a completely
different representation, do a bunch of transformations with it,
re-generate LLVM IR and spit it back into the pipeline.

By that time, all analyses have to be invalidated. All
canonicalisations that had been done will probably be destroyed and
many current pattern matches will stop working. This infrastructure is
only meaningful at the function level or higher, so the potential for
wide range destruction is not trivial.

Don't get me wrong, I like the idea, it's a cool experiment using some
cool data structures and algorithms. But previous experiences with the
pass manager have, well, not gone smooth in any shape or form.

Q: Relation to LoopVectorize/VPlan?
A: VPlan has similar design goals [9] but is intended for
vectorization only.

Again, by a conservative design. I think adding yet another won't help.

My point is: if this is the way to go, then we should start to think
how we make everything that makes sense become part of this scheme.
Splitting the pass manager into SSA and Tree, running some passes in
one and others in the other, and so on.

But creating multiple, incompatible and potentially destructive whole
new pass managers will make a hard job impossible.

However, it lacks cheap copies. Instead
of instructions, it uses recipes/"meta-instructions" that handle what
happens to instructions after vectorization, e.g. do that operation on
each vector lane ("WIDEN").
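
(As a rough picture of the "WIDEN" semantics, not VPlan's actual
classes: the recipe replays one scalar operation on every lane of the
vector.)

    // Toy WIDEN: a scalar fadd, replayed across all VF lanes.
    #include <array>

    constexpr int VF = 4; // vectorization factor
    using Vec = std::array<float, VF>;

    Vec widenFAdd(const Vec &A, const Vec &B) {
      Vec R{};
      for (int Lane = 0; Lane < VF; ++Lane)
        R[Lane] = A[Lane] + B[Lane]; // same op on each vector lane
      return R;
    }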

Nothing stops us from implementing a leaner approach to VPlan. It
wouldn't be a trivial implementation, but the volume of work that
would be required in this proposal is staggering, too.

VPlan is more oriented towards modifying
instructions instead of statements as collections of instructions.

Fair enough, the design was to enhance SIMD code generation, not any
kind of parallel semantics. I guess it would be possible to add the
concept of higher level blocks to VPlan.

All in all, VPlan is young and in constant refactoring, and perhaps it
would be more productive to move it towards a more inclusive approach
than throwing it away before it fully matures to start a whole new
project.

Q: Relation to MLIR?
A: MLIR is more similar to LLVM-IR than to a loop hierarchy. For
instance, it also does not feature cheap copies.

If you treat MLIR as your red tree, you could create a green tree
(perhaps as a dialect) and have cheap copies (passing the dialects and
deltas without passing the base).

An advantage is that
loops and multi-dimensional arrays can be represented in the language
without needing to be rediscovered, but they have to be inserted by a
front-end.

Not necessarily. We have discussed introducing dialect annotation to
MLIR during compile time from analysis passes that would basically do
what the front-end should have done.

Conclusions?

This was a long email, with too many proposals, so I don't have any
strong opinion or conclusions, not even from my own comments.

Overall, I like a lot of the ideas (red/green, tree optimisation,
different search strategy), but I dislike the encompassing proposal to
*replace* a lot of the existing infrastructure.

For better or worse, LLVM is a product of its age. Some things could
have been done better, but we have always adopted the "general
consensus and slow movement" way to change things. Sometimes too slow,
but...

Now, MLIR can be a way to speed that up.

It is a much more malleable format than LLVM IR, it was designed for
high-level representation, has a lot of parallelism concepts in it and
it's supposed to interact with LLVM IR seamlessly.

It may be much easier to use MLIR to interoperate the two "pass
managers" _together_ than converting from one to the other and vice
versa.

This is a bold claim and I have no evidence it could ever work. But I
think it would still be less work than creating yet another pass
manager from scratch.

cheers,
--renato

> 1. Candidate selection through cost function
> --------------------------------------------
> Instead of needing to know which transformation is profitable before
> applying it, create a copy of the data structure, modify it, and
> compare it to the original. Moreover, holding multiple, differently
> optimized copies allows evaluating each variant using a cost function
> and selecting the 'best' one when re-generating LLVM-IR (or re-using
> the original LLVM-IR).

This sounds a lot like VPlan.

Yes, as mentioned in the Q&A. Unfortunately, VPlan is neither able to
represent arbitrary code, nor does it have cheap copies.

> Instantiating every possible sequence of transformations of course is
> not feasible, so the search space needs to be pruned. This could be
> made dependent on the optimization level.

Are you planning on using heuristic searches? This could make the
vectoriser unstable under small input changes, making it hard to get
consistent results and reliable tests.

I'm not against such an idea, but I think we need to be conservative in
such a core component of the compiler.

It would be nice to have -Ogocrazy to mean "keep going until you find
something", but usually, -O3 should terminate. :slight_smile:

I agree, as outlined in the RFC under "predefined optimization levels".

> 3. Apply transformations from high-level to low-level
> -----------------------------------------------------
> Optimization should be applied from very specialized to very general
> (potentially after some canonicalization). For instance, the first
> step could be detecting common idioms such as gemm and replacing them
> with either a BLAS function call or applying well-studied
> optimizations like BLIS to them. After such an idiom has been
> detected, no other transformation should be applied to it.

I'm sceptical of such machinery. People usually write bad code (me
included) and trying to match multiple patterns to the same semantics
will be hard, considering how lenient C++ is about pointer handling
and type conversions.

This conversion is a possibility and certainly not the main motivation
for a loop hierarchy. Smaller idioms exist as well, such as detecting
popcount. Even with gemm I think it would be nice if it could be
written in a naive version in the source code that compiles with any
compiler, but also benefit from the target platform's hand-optimized
performance primitives by adding a compiler switch (which could be
-O3).

> Mid-level transformations may try to map entire loop nests to cache-
> and compute hierarchies (SIMT threads, multiprocessors, offloading,
> etc) by applying transformations such as tiling, loop interchange and
> array packing.

This is hard but very profitable. However, feels to me again that this
is just VPlan packed differently.

While VPlan still has no way yet to handle even simple outer-loops
(has that landed yet?), once we do, then the natural progression will
be to start understanding their semantics and possibly make high level
assumptions like that.

I wouldn't have thought that parallelization and offloading were ever
considered on top of VPlan.

> 6. Late fallback versioning at IR regeneration
> ------------------------------------------
> When a transformation is applied, it can attach conditions (no
> aliasing, no integer wrap, value restrictions, etc.) under which the
> transformation is valid. During LLVM-IR generation, these conditions
> are collected and emitted as run-time conditions. If the condition
> fails, the original code is executed.

This sounds like it will bloat code for a lot of cold cases. Or worse,
get it wrong, and put hot code in the cold path.

Are you arguing against code versioning? It is already done today by
multiple passes such as LoopVersioningLICM, LoopDistribute,
LoopUnrollAndJam and LoopVectorize. The proposal explicitly tries to
avoid code bloat by having just one fallback copy. Runtime conditions
can be chosen more or less optimistically, but I don't see how this
should be an argument for all kinds of versioning.

If you are concerned about bloat in cold paths, we could use profile
information to optimize cold functions with '-Os', something that GCC
does, but not Clang.

> 7. Standardized API for analyses with multiple implementations

These are good to have regardless of which vectorisation strategy we use.

In LLVM, AliasAnalysis does this, but has not yet found another application.

> 8. Abstract representation of statements
> ----------------------------------------
> For instance, assuming that %add is not used a second time, in
> the example below
>
> %add = add i64 %arg, 2
> %mul = shl i64 %add, 1
>
> the two instructions should always be computed together in the same
> loop iteration.

This may inhibit further combines, or even detection of
target-specific patterns for SIMD code that aren't common.

I agree that not forcefully binding statements with instructions is a
good idea, but this may need a target-specific pattern matcher to be
more sensitive to target idiosyncrasies.

My idea here is that loop-level optimizations rarely need to know
which target-specific instructions are executed, as long as they know
their performance-relevant properties. This might be a difference from
vectorization, which may be more ISA-specific.

> 9. Expansion of use-def chains to arrays when spanning loops
> ------------------------------------------------------------
> The transforming pass has to consider this during its profitability
> model. The big advantage is that in terms of correctness, use-def
> chains do not manifest false dependencies.

Sounds good, but also creates the problem of how to handle the array.
If 'n' is unknown, or dependent on SIMD widths or number of threads,
it's too low level to add anything that is guaranteed to not change
the performance profile of the original loop.

As mentioned, the profitability model has to take this into account.
Conservatively, we may only do this if the resulting array has a small
constant size such that we can assume that even multiple of them fit
on the stack.

> Q: Relation to the new/legacy pass manager?
> A: LLVM's pass managers are unfortunately not designed to apply to
> subtrees nor persistent data structures besides the LLVM-IR itself.

By design. The more alternative persistent data structures you have
being modified by a series of passes, the harder it is to know what
did what and where you are.

The proposal avoids persistent data structures between separate passes.

Note that MachineFunctionPass maintains the MachineFunction data
structure in parallel to the LLVM-IR.

> Instead, the loop tree optimizer would be its own monolithic pass on
> the pass manager level (like MachinePassManager and VPlan). My idea is
> to add it somewhere before LoopVectorize, but after the inliner,
> potentially replacing most other loop transformations.

To me this almost sounds like Polly. Take LLVM IR into a completely
different representation, do a bunch of transformations with it,
re-generate LLVM IR and spit it back into the pipeline.

There is indeed an inspiration from Polly.

By that time, all analyses have to be invalidated. All
canonicalisations that had been done will probably be destroyed and
many current pattern matches will stop working. This infrastructure is
only meaningful at the function level or higher, so the potential for
wide range destruction is not trivial.

Don't get me wrong, I like the idea, it's a cool experiment using some
cool data structures and algorithms. But previous experiences with the
pass manager have, well, not gone smooth in any shape or form.

What experiments? I don't see a problem if the pass manager has to
invalidate analyses and re-run canonicalization passes. This happens
many times in the default pass pipelines. In addition, this
invalidation is only necessary if the loop optimization pass optimizes
something, in which case the additional cost should be justified.

> Q: Relation to LoopVectorize/VPlan?
> A: VPlan has similar design goals [9] but is intended for
> vectorization only.

Again, by a conservative design. I think adding yet another won't help.

My point is: if this is the way to go, then we should start to think
how we make everything that makes sense become part of this scheme.
Splitting the pass manager into SSA and Tree, running some passes in
one and others in the other, and so on.

But creating multiple, incompatible and potentially destructive whole
new pass managers will make a hard job impossible.

I don't think the proposal qualifies as including a fully flexible new
pass manager, at least no more than the current mechanism LoopVectorize
uses to run passes on VPlan (LoopVectorizationPlanner::plan).

> However, it lacks cheap copies. Instead
> of instructions, it uses recipes/"meta-instructions" that handle what
> happens to instructions after vectorization, e.g. do that operation on
> each vector lane ("WIDEN").

Nothing stops us from implementing a leaner approach to VPlan. It
wouldn't be a trivial implementation, but the volume of work that
would be required in this proposal is staggering, too.

> VPlan is more oriented towards modifying
> instructions instead of statements as collections of instructions.

Fair enough, the design was to enhance SIMD code generation, not any
kind of parallel semantics. I guess it would be possible to add the
concept of higher level blocks to VPlan.

All in all, VPlan is young and in constant refactoring, and perhaps it
would be more productive to move it towards a more inclusive approach
than throwing it away before it fully matures to start a whole new
project.

While I still think the goals of VPlan and a loop hierarchy are
different, I expect VPlan to be production-ready earlier than this
proposal. I fear that combining them would delay them both.

xkcd: Standards

While I can never find this xkcd not funny, the loop hierarchy is
not intended to be universal.

> Q: Relation to MLIR?
> A: MLIR is more similar to LLVM-IR than to a loop hierarchy. For
> instance, it also does not feature cheap copies.

If you treat MLIR as your red tree, you could create a green tree
(perhaps as a dialect) and have cheap copies (passing the dialects and
deltas without passing the base).

I don't see how this could work.

> An advantage is that
> loops and multi-dimensional arrays can be represented in the language
> without needing to be rediscovered, but they have to be inserted by a
> front-end.

Not necessarily. We have discussed introducing dialect annotation to
MLIR during compile time from analysis passes that would basically do
what the front-end should have done.

The argument is that MLIR has first-class expressions for
multi-dimensional array accesses ("MemRef") while LLVM-IR does not.

https://lists.llvm.org/pipermail/llvm-dev/2019-July/134063.html

Both of them can have analyses to raise the abstraction to a
multi-dimensional access ("delinearization").
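
As an illustration of what delinearization recovers (hypothetical
example):

    // The flattened subscript below can be raised to the two-dimensional
    // access A[i][j] on an n-column matrix, the form a dependence
    // analysis needs to reason about the enclosing loop nest.
    double load2d(const double *A, long n, long i, long j) {
      return A[i * n + j]; // delinearized: A[i][j]
    }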

Conclusions?

This was a long email, with too many proposals, so I don't have any
strong opinion or conclusions, not even from my own comments.

Thank you for going through it!

Overall, I like a lot of the ideas (red/green, tree optimization,
different search strategy), but I dislike the encompassing proposal to
*replace* a lot of the existing infrastructure.

Not a replacement, but an addition that does not always need to be
enabled (e.g. -O0).

In a previous RFC [8] I tried to NOT introduce a data structure but to
re-use LLVM-IR. The only discussion there was about the RFC was about
not 'abusing' the LLVM-IR.

https://lists.llvm.org/pipermail/llvm-dev/2017-October/118169.html
https://lists.llvm.org/pipermail/llvm-dev/2017-October/118258.html

I definitely see the merits of using fewer data structures, but it is
also hard to re-use something existing for a different purpose (in
this case: VPlan) without making both more complex.

For better or worse, LLVM is a product of its age. Some things could
have been done better, but we have always adopted the "general
consensus and slow movement" way to change things. Sometimes too slow,
but...

Now, MLIR can be a way to speed that up.

It is a much more malleable format than LLVM IR, it was designed for
high-level representation, has a lot of parallelism concepts in it and
it's supposed to interact with LLVM IR seamlessly.

It may be much easier to use MLIR to interoperate the two "pass
managers" _together_ than converting from one to the other and vice
versa.

This is a bold claim and I have no evidence it could ever work. But I
think it would still be less work than creating yet another pass
manager from scratch.

This is why I don't want the framework to be too tangled with LLVM-IR.
For the foreseeable future, Clang will generate LLVM-IR, but our
motivation is to (also) optimize C/C++ code. That is, I do not see a
way to not (also) handle LLVM-IR until Clang is changed to generate
MLIR (which then again will be another data structure in the system).

Michael

Yes, as mentioned in the Q&A. Unfortunately, VPlan is neither able to
represent arbitrary code, nor does it have cheap copies.

Orthogonal, but we should also be looking into implementing the cheap
copies in VPlan if we want to search for composable plans.

This conversion is a possibility and certainly not the main motivation
for a loop hierarchy.

I know. There are many things that can be done with what you propose,
but we should focus on what's the main motivation.

From what I can tell, the tree representation is a concrete proposal
for the many-year discussion about parallel IR.

The short paper doesn't mention that, nor does it discuss other
opportunities to fix pipeline complexity (which is inherent to any
compiler).

I still believe that many of the techniques you propose are meaningful
ways to solve them, but creating another IR will invariably create
some adoption barriers.

Especially when we already have VPlan and MLIR converging now, which
will need to find their own spaces, too.

I wouldn't have thought that parallelization and offloading were ever
considered on top of VPlan.

I don't see why not. VPlan is a structure for picking a path through
composable transformations.

While so far it has been mainly focused on replacing the monolithic
vectorisation, there are concrete plans to look at composition and
more complex idioms.

Are you arguing against code versioning? It is already done today by
multiple passes such as LoopVersioningLICM, LoopDistribute,
LoopUnrollAndJam and LoopVectorize. The proposal explicitly tries to
avoid code bloat by having just one fallback copy. Runtime conditions
can be chosen more or less optimistically, but I don't see how this
should be an argument for all kinds of versioning.

No. I'm cautious about the combination of heuristic search and
versioning, especially when the conditions are runtime based. It may
be hard to CSE them later.

The paths found may not be optimal in terms of intermediate states.

> Don't get me wrong, I like the idea, it's a cool experiment using some
> cool data structures and algorithms. But previous experiences with the
> pass manager have, well, not gone smooth in any shape or form.

What experiments? I don't see a problem if the pass manager has to
invalidate analyses and re-run canonicalization passes. This happens
many times in the default pass pipelines. In addition, this
invalidation is only necessary if the loop optimization pass optimizes
something, in which case the additional cost should be justified.

My point goes back to doing that in VPlan, then tree. The more
back-and-forth IR transformations we add to the pipeline, the more
brittle it will be.

The original email also proposes, for the future, to do all sorts of
analyses and transformations in the tree representation, and that will
likely be incompatible with (or at least not propagated through) the
conversions.

I don't think the proposal qualifies as including a fully flexible new
pass manager, at least no more than the current mechanism LoopVectorize
uses to run passes on VPlan (LoopVectorizationPlanner::plan).

Sorry, that came out stronger than it should have been. I agree it's
not a "whole new pass manager".

While I still think the goals of VPlan and a loop hierarchy are
different, I expect VPlan to be production-ready earlier than this
proposal. I fear that combining them would delay them both.

I get it, but I fear taking a completely different approach may make
it harder to get your proposal to show benefits any time soon.

> xkcd: Standards

While I can never find this xkcd not funny, the loop hierarchy is
not intended to be universal.

Sorry, poetic license. :-)

I tried to reflect the perils of creating too many, sometimes competing, IRs.

In a previous RFC [8] I tried to NOT introduce a data structure but to
re-use LLVM-IR. The only discussion there was about the RFC was about
not 'abusing' the LLVM-IR.

https://lists.llvm.org/pipermail/llvm-dev/2017-October/118169.html
https://lists.llvm.org/pipermail/llvm-dev/2017-October/118258.html

I definitely see the merits of using fewer data structures, but it is
also hard to re-use something existing for a different purpose (in
this case: VPlan) without making both more complex.

My point about avoiding more structures and IRs was related to VPlan
and MLIR, not LLVM-IR.

I agree there should be an abstraction layer to do parallelisation
analysis, but we already have two, and I'd rather add many of your
good proposals on those than create a third.

Perhaps it's not clear how we could do that now, but we should at
least try to weigh the options.

I'd seriously look at adding a tree-like annotation as an MLIR
dialect, and use it for lean copies.

For the foreseeable future, Clang will generate LLVM-IR, but our
motivation is to (also) optimize C/C++ code. That is, I do not see a
way to not (also) handle LLVM-IR until Clang is changed to generate
MLIR (which then again will be another data structure in the system).

Even if/when Clang generates MLIR, there's no guarantee the high-level
dialects will be preserved until the vectorisation pass. And other
front-ends may not generate the same quality of annotations.

We may have to re-generate what we need anyway, so there is no point
in waiting for all the front-ends to do what we need, or for all the
previous passes to guarantee to keep it.

cheers,
--renato

In the 2018 LLVM DevMtg [1], I presented some shortcomings of how LLVM
optimizes loops. In summary, the biggest issues are (a) the complexity
of writing a new loop optimization pass (including needing to deal
with a variety of low-level issues, a significant amount of required
boilerplate, the difficulty of analysis preservation, etc.), (b)
independent optimization heuristics and a fixed pass ordering, and (c)
code explosion due to versioning. Currently, different people are
working on improving different loop optimization passes such as
LoopInterchange, LoopFuse and LoopVectorize. Even if LLVM had
‘perfect’ individual loop passes, they would still have the
aforementioned problems because they remain independent, and the
system as a whole would be suboptimal.

Hi Michael,

Thank you for bringing this up. This is an area of interest, and I certainly share you view of what a pain this all is right now. I can tell you’ve put a lot of thought into this and time into your RFC!

The central idea is to use a modifiable loop tree – similar to
LoopInfo – as the primary representation. LLVM-IR is converted to a
loop tree, then optimized and finally LLVM-IR is generated again for
subtrees that are considered profitable. This is not a new concept, it
has already been used in compilers such as IBM XL Fortran (called ASTI
[4]) and Silicon Graphics/Open64 (called LNO [10]), and in research
such as the Value State Dependence Graph [3].

Ignoring the details of its representation, this is also conceptually how Polly works: code is lifted into its representation, transformed, then lowered back down.

  4. Represents loops and predicates instead of CFGs

Yes, totally!

Overall, I think that this discussion would be easier to process if we broke it into a few pieces. There seems to be consensus that LLVM IR (as is) is not the right representation for aggressive loop transformations. If we don’t have consensus on this, then I’d make sure to start there.

Once that is established, there is a question of “what is the right representation to use”? This question has two subcomponents: what data structure should we use, and what is the IR within it.

If you propose introducing a brand new data structure, please expect me to push back on that pretty hard. This is a perfect application of MLIR, and using it provides a lot of value (including amazing testing tools, round-tripping, location tracking, etc) which would otherwise have to be reinvented, and does not dictate the IR structure otherwise. MLIR absolutely supports nested loop structures etc, as is seen in the affine dialect.

The MLIR community also is highly invested in HPC-style transformations on this, and a lot of thought has gone into it. You can learn more about this in the slides and videos from the MLIR open design meetings.

Once you achieve consensus on the data structure, there is the question of what IR to use within it. I would recommend starting with some combination of “existing LLVM IR operations + high level control flow representation”, e.g. parallel and affine loops. The key here is that you need to always be able to lower in a simple and predictable way to LLVM IR (this is one of the things that classic polyhedral systems did suboptimally, making it difficult to reason about the cost model of various transformations), and this is a natural incremental starting point anyway. Over time, more high level concepts can be gradually introduced. FYI, MLIR already has a reasonable LLVM dialect and can generate LLVM IR from it, so we’d just need an “LLVM IR → MLIR LLVM dialect” conversion, which should be straightforward to build.

Once you have the data structure and the dialect within it decided, you have the set of transformations. Again, you’ve given a lot of thought to this, and that all sounds great to me, but the priorities can be driven by whoever wants to contribute and what concrete problems they have to solve.

Once the infra for “raising to this representation and lowering back down” is figured out, we can open the box of having clang and other front ends directly generate it.

Q: Relation to MLIR?

A: MLIR is more similar to LLVM-IR than to a loop hierarchy.

This is not true, MLIR is great for dialects that want to model loop hierarchies naturally, this is a major focus of the affine dialect (e.g. see affine.for on that page). MLIR is not limited to affine loops, that is just a choice made by the affine dialect - the loop dialect has more general constructs that are less developed.

For
instance, it also does not feature cheap copies.

I’m not sure what this means.

An advantage is that
loops and multi-dimensional arrays can be represented in the language
without needing to be rediscovered, but they have to be inserted by a
front-end.

This is correct, but I don’t see how this helps if your focus is raising code that has already been lowered to LLVM IR, e.g. by Clang or some other frontend that generates LLVM IR today.

That is, if Clang was generating MLIR, loops and arrays
still have to be rediscovered.

This isn’t true, it would be perfectly sensible to lower C control flow structures directly to MLIR. The primary concerns are things like unstructured switches (think duff’s device) and unstructured gotos, but these occur rarely: they need to be handled correctly, but that can be done with a traditionally lowered CFG and other “best effort” attempts to raise them.

Other frontends like Swift and Flang could also generate this directly if they chose to, getting the benefits of progressive lowering.

However, a loop hierarchy optimizer
could be applied to MLIR just as well as to LLVM-IR.

Right! In addition to structured control flow, MLIR has great support for CFG representations like LLVM of course. :-)

-Chris

Overall, I think that this discussion would be easier to process if we broke it into a few pieces. There seems to be consensus that LLVM IR (as is) is not the right representation for aggressive loop transformations. If we don’t have consensus on this, then I’d make sure to start there.

From all the meetings we had about parallelism representations in IR
over the past few dev meetings, this was pretty much the only
agreement. :-)

We couldn't find a clear way to represent most parallel concepts in IR
(without breaking the others), but there wasn't a single format on top
of LLVM IR that we could all agree on.

If you propose introducing a brand new data structure, please expect me to push back on that pretty hard. This is a perfect application of MLIR, and using it provides a lot of value (including amazing testing tools, round-tripping, location tracking, etc) which would otherwise have to be reinvented, and does not dictate the IR structure otherwise. MLIR absolutely supports nested loop structures etc, as is seen in the affine dialect.

Today, I believe MLIR can serve that purpose, with (possibly)
overlapping dialects, and have cheap copies.

Cheap copies are required for heuristic searches, which have at least
polynomial compute/memory cost, but sometimes exponential.

Copying the whole module N^2 times (N = some pre-defined large budget)
won't work, which is why overlays (like the red/green tree idea) are
needed.

Journaling and snapshotting are also possible ways to keep memory
almost constant (without exploding compute), and that should be
possible with MLIR dialects, I think.

Once you have the data structure and the dialect within it decided, you have the set of transformations. Again, you’ve given a lot of thought to this, and that all sounds great to me, but the priorities can be driven by whoever wants to contribute and what concrete problems they have to solve.

If we don't have a generic enough representation, early designs will
change how later ones will be implemented.

That's why I think MLIR would be the right choice, as it's much more
flexible and composable.

cheers,
--renato

Yes, as mentioned in the Q&A. Unfortunately, VPlan is neither able to
represent arbitrary code, nor does it have cheap copies.

Orthogonal, but we should also be looking into implementing the cheap
copies in VPlan if we want to search for composable plans.

VPlan structures have many references to neighboring structures such as parents and use-def chains. This makes adding cheap copies as an afterthought really hard.

This conversion is a possibility and certainly not the main motivation
for a loop hierarchy.

I know. There are many things that can be done with what you propose,
but we should focus on what’s the main motivation.

From what I can tell, the tree representation is a concrete proposal
for the many-year discussion about parallel IR.

As I recall, the Parallel IR approaches were trying to add parallel constructs to the existing LLVM-IR. This added the issue that the current infrastructure suddenly needed to handle those as well, which became a major problem for adoption.

The short paper doesn’t mention that, nor does it discuss other
opportunities to fix pipeline complexity (which is inherent to any
compiler).

I still believe that many of the techniques you propose are meaningful
ways to solve them, but creating another IR will invariably create
some adoption barriers.

I see it as an advantage with respect to adoption: it can be switched on and off without affecting other parts.

Are you arguing against code versioning? It is already done today by
multiple passes such as LoopVersioningLICM, LoopDistribute,
LoopUnrollAndJam and LoopVectorize. The proposal explicitly tries to
avoid code bloat by having just one fallback copy. Runtime conditions
can be chosen more or less optimistically, but I don’t see how this
should be an argument for all kinds of versioning.

No. I’m cautious about the combination of heuristic search and
versioning, especially when the conditions are runtime based. It may
be hard to CSE them later.

The paths found may not be optimal in terms of intermediate states.

Versioning is always a trade-off between how likely the preconditions apply and code size (and maybe how expensive the runtime checks are). IMHO this concern is separate from how code versioning is implemented.

Don’t get me wrong, I like the idea, it’s a cool experiment using some
cool data structures and algorithms. But previous experiences with the
pass manager have, well, not gone smooth in any shape or form.

What experiments? I don’t see a problem if the pass manager has to
invalidate analyses and re-run canonicalization passes. This happens
many times in the default pass pipelines. In addition, this
invalidation is only necessary if the loop optimization pass optimizes
something, in which case the additional cost should be justified.

My point goes back to doing that in VPlan, then tree. The more
back-and-forth IR transformations we add to the pipeline, the more
brittle it will be.

Agreed, but IMHO this is the price to pay for better loop optimizations.

The original email also proposes, for the future, to do all sorts of
analyses and transformations in the tree representation, and that will
likely be incompatible with (or at least not propagated through) the
conversions.

Correct, but I’d argue these are different kinds of analyses, not necessarily even useful for other representations. MLIR also has its own set of analyses, separate from those on LLVM-IR.

In a previous RFC [8] I tried to NOT introduce a data structure but to
re-use LLVM-IR. The only discussion there was about the RFC was about
not ‘abusing’ the LLVM-IR.

https://lists.llvm.org/pipermail/llvm-dev/2017-October/118169.html
https://lists.llvm.org/pipermail/llvm-dev/2017-October/118258.html

I definitely see the merits of using fewer data structures, but it is
also hard to re-use something existing for a different purpose (in
this case: VPlan) without making both more complex.

My point about avoiding more structures and IRs was related to VPlan
and MLIR, not LLVM-IR.

I agree there should be an abstraction layer to do parallelisation
analysis, but we already have two, and I’d rather add many of your
good proposals on those than create a third.

Perhaps it’s not clear how we could do that now, but we should at
least try to weigh the options.

I’d seriously look at adding a tree-like annotation as an MLIR
dialect, and use it for lean copies.

Like VPlan, MLIR is a representation with many references between objects from different levels. I do not see how to add cheap copies as an afterthought.

For the foreseeable future, Clang will generate LLVM-IR, but our
motivation is to (also) optimize C/C++ code. That is, I do not see a
way to not (also) handle LLVM-IR until Clang is changed to generate
MLIR (which then again will be another data structure in the system).

Even if/when Clang generates MLIR, there’s no guarantee the high-level
dialects will be preserved until the vectorisation pass.

I’d put loop optimizations earlier into the pipeline than vectorization. Where exactly is a phase ordering problem. I’d want to at least preserve multi-dimensional subscripts. Fortunately MemRef is a core MLIR construct and unlikely to be lowered before lowering to another representation (likely LLVM-IR).

And other
front-ends may not generate the same quality of annotations.
We may have to re-generate what we need anyway, so there is no point
in waiting for all the front-ends to do what we need, or for all the
previous passes to guarantee to keep it.

I don’t see how this is relevant for a Clang-based pipeline. Other languages likely need a different pipeline than one intended for C/C++ code.

There are not a lot of high-level semantics required to be preserved to build a loop hierarchy.

Thanks for the productive discussion,
Michael

The central idea is to use a modifiable loop tree – similar to
LoopInfo – as the primary representation. LLVM-IR is converted to a
loop tree, then optimized and finally LLVM-IR is generated again for
subtrees that are considered profitable. This is not a new concept, it
has already been used in compilers such as IBM XL Fortran (called ASTI
[4]) and Silicon Graphics/Open64 (called LNO [10]), and in research
such as the Value State Dependence Graph [3].

Ignoring the details of its representation, this is also conceptually how Polly works: code is lifted into its representation, transformed, then lowered back down.

Indeed I tried to improve on Polly’s internal representation, and improve on the issue that Polly can only represent a subset of LLVM-IR code.

Overall, I think that this discussion would be easier to process if we broke it into a few pieces. There seems to be consensus that LLVM IR (as is) is not the right representation for aggressive loop transformations. If we don’t have consensus on this, then I’d make sure to start there.

Once that is established, there is a question of “what is the right representation to use”? This question has two subcomponents: what data structure should we use, and what is the IR within it.

If you propose introducing a brand new data structure, please expect me to push back on that pretty hard.

Which I think is a good thing since I also do not want too many data structures being more-or-less well maintained. But I also think there is a good argument for a loop-centric data structure.

This is a perfect application of MLIR, and using it provides a lot of value (including amazing testing tools, round-tripping, location tracking, etc) which would otherwise have to be reinvented, and does not dictate the IR structure otherwise. MLIR absolutely supports nested loop structures etc, as is seen in the affine dialect.

The MLIR community also is highly invested in HPC-style transformations on this, and a lot of thought has gone into it. You can learn more about this in the slides and videos from the MLIR open design meetings.

I have been following the development of MLIR.

Once you achieve consensus on the data structure, there is the question of what IR to use within it. I would recommend starting with some combination of “existing LLVM IR operations + high level control flow representation”, e.g. parallel and affine loops. The key here is that you need to always be able to lower in a simple and predictable way to LLVM IR (this is one of the things that classic polyhedral systems did suboptimally, making it difficult to reason about the cost model of various transformations), and this is a natural incremental starting point anyway. Over time, more high level concepts can be gradually introduced. FYI, MLIR already has a reasonable LLVM dialect and can generate LLVM IR from it, so we’d just need an “LLVM IR → MLIR LLVM dialect” conversion, which should be straightforward to build.

Adding an LLVM-IR → MLIR → LLVM-IR round-trip would at the beginning just introduce compile-time overhead and what Renato described as brittleness. I fear this hurts adoption.

Once you have the data structure and the dialect within it decided, you have the set of transformations. Again, you’ve given a lot of thought to this, and that all sounds great to me, but the priorities can be driven by whoever wants to contribute and what concrete problems they have to solve.

Once the infra for “raising to this representation and lowering back down” is figured out, we can open the box of having clang and other front ends directly generate it.

This suggestion would also apply to VPlan. Ignoring that work on VPlan started before MLIR, would you have suggested implementing VPlan on MLIR as well? Would you maybe even advise retargeting VPlan to MLIR now?

Q: Relation to MLIR?

A: MLIR is more similar to LLVM-IR than to a loop hierarchy.

This is not true, MLIR is great for dialects that want to model loop hierarchies naturally, this is a major focus of the affine dialect (e.g. see affine.for on that page). MLIR is not limited to affine loops, that is just a choice made by the affine dialect - the loop dialect has more general constructs that are less developed.

This is definitely a subjective question. I think that MLIR is closer to LLVM-IR in how it is processed. Both have a sequence of passes running over a single source of truth. Both allow walking the entire structure from every instruction/operation/block. Analyses are on the function or module level. Both have CFGs (I think for certain kinds of transformations it is an advantage that control flow is handled implicitly).

For
instance, it also does not feature cheap copies.

I’m not sure what this means.

The possibility to make local changes speculatively without copying the entire data structure. IMHO this is a central idea that allows applying a transformation speculatively to pass it to a legality check and cost heuristic without committing to apply it. As a consequence, passes do not need to implement these in a transformation-specific manner, drastically reducing the burden of implementation.

For instance, more loop transformations are feasible if instructions are moved into the innermost loops. With speculative transformations, we can canonicalize the representation to sink computations into loops – the opposite of what LICM does – and then see whether a transformation can be applied. If not, the speculative representation is discarded without having an effect on the original representation (and not needing to hoist those computations again).
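
In source terms, that sinking canonicalization could look like the
sketch below (hypothetical example): the invariant address computation
is moved back into the loop for the duration of the legality and cost
checks, and the speculative copy is discarded if no transformation
applies.

    // Hoisted form (what LICM produces):
    //   float *base = p + off;
    //   for (long i = 0; i < n; ++i) base[i] += 1.0f;
    //
    // Speculatively sunk form used only during the legality/cost check:
    void sunkForAnalysis(float *p, long off, long n) {
      for (long i = 0; i < n; ++i) {
        float *base = p + off; // deliberately recomputed per iteration
        base[i] += 1.0f;
      }
    }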

Because the MLIR classes have many references to related objects (such as pointers to parents and use-def chains), I don’t think it is feasible to implement on top of MLIR.

An advantage is that
loops and multi-dimensional arrays can be represented in the language
without needing to be rediscovered, but they have to be inserted by a
front-end.

This is correct, but I don’t see how this helps if your focus is raising code that has already been lowered to LLVM IR, e.g. by Clang or some other frontend that generates LLVM IR today.

Indeed, I would hope that LLVM-IR can preserve multi-dimensional array accesses in some fashion as well (https://lists.llvm.org/pipermail/llvm-dev/2019-July/134063.html). However, currently MLIR has the advantage of being able to represent them.

That is, if Clang was generating MLIR, loops and arrays
still have to be rediscovered.

This isn’t true, it would be perfectly sensible to lower C control flow structures directly to MLIR. The primary concerns are things like unstructured switches (think duff’s device) and unstructured gotos, but these occur rarely: they need to be handled correctly, but that can be done with a traditionally lowered CFG and other “best effort” attempts to raise them.

Moreover, syntactical loop structures are also not a reliable indicator that there is a loop. Often enough, do, for and while are used for syntactical reasons (do { } while(0)). Yes, you could eliminate them if a non-loop is detected, but handling break, continue, etc. correctly is a lot of effort. Another case is coroutines that are lowered with gotos into loops, unless you think loop optimizers should handle coroutines directly.

On the other hand, natural loop detection on CFGs is quite mature (with a remaining issue of irreducible loops that might appear, but these can also be eliminated again). As a plus, optimization depends less on how the source code is written.

Thanks for the productive discussion,
Michael

As I recall, the Parallel IR approaches were trying to add parallel constructs to the existing LLVM-IR. This added the issue that the current infrastructure suddenly needed to handle those as well, which became a major problem for adoption.

Yes, and that's why we could never agree on the one representation. A
completely separate one solves that problem, but introduces another,
itself.

I see it as an advantage with respect to adoption: it can be switched on and off without affecting other parts.

That's not necessarily true.

If we do it like Polly, it is, but then the ability to reuse code is very
low and the time spent converting across is high. If we want to reuse,
then we'll invariably add behavioural dependencies and disabling the
pass may have side-effects.

Versioning is always a trade-off between how likely the preconditions apply and code size (and maybe how expensive the runtime checks are). IMHO this concern is separate from how code versioning is implemented.

Agreed.

Agreed, but IMHO this is the price to pay for better loop optimizations.

This may be true, and I can easily accept that, as long as we'll all
aware of the costs of doing so up front.

I'd put loop optimizations earlier into the pipeline than vectorization. Where exactly is a phase ordering problem. I'd want to at least preserve multi-dimensional subscripts. Fortunately MemRef is a core MLIR construct and unlikely to be lowered before lowering to another representation (likely LLVM-IR).

Many front-ends do that even before lowering to IR because of the
richer semantics of the AST, but it's also common for that to
introduce bugs down the line (don't want to name any proprietary
front-ends here).

I agree moving high-level optimisation passes up and doing so in a
high-level IR is a good idea.

I don't see how this is relevant for a Clang-based pipeline. Other languages likely need a different pipeline than one intended for C/C++ code.

Yes, but we want our passes to work for all languages and be less
dependent on how well they lower their code.

If they do it well, awesome. If not, and if we can identify patterns
in LLVM IR then there is no reason not to.

cheers,
--renato

Once that is established, there is a question of “what is the right representation to use”? This question has two subcomponents: what data structure should we use, and what is the IR within it.

If you propose introducing a brand new data structure, please expect me to push back on that pretty hard.

Which I think is a good thing since I also do not want too many data structures being more-or-less well maintained. But I also think there is a good argument for a loop-centric data structure.

Agreed, I think it is incredibly important for a first class loop optimizer to have first class structured loops, parallel loops etc.

Once you achieve consensus on the data structure, there is the question of what IR to use within it. I would recommend starting with some combination of “existing LLVM IR operations + high level control flow representation”, e.g. parallel and affine loops. The key here is that you need to always be able to lower in a simple and predictable way to LLVM IR (this is one of the things that classic polyhedral systems did suboptimally, making it difficult to reason about the cost model of various transformations), and this is a natural incremental starting point anyway. Over time, more high level concepts can be gradually introduced. FYI, MLIR already has a reasonable LLVM dialect and can generate LLVM IR from it, so we’d just need an “LLVM IR → MLIR LLVM dialect” conversion, which should be straightforward to build.

Adding an LLVM-IR → MLIR → LLVM-IR round-trip would at the beginning just introduce compile-time overhead and what Renato described as brittleness. I fear this hurts adoption.

Isn’t this true of any higher level IR? Unless I’m missing something big, this seems inherent to your proposal.

Once you have the data structure and the dialect within it decided, you have the set of transformations. Again, you’ve given a lot of thought to this, and that all sounds great to me, but the priorities can be driven by whoever wants to contribute and what concrete problems they have to solve.

Once the infra for “raising to this representation and lowering back down” is figured out, we can open the box of having clang and other front ends directly generate it.

This suggestion would also apply to VPlan. Ignoring that work on VPlan started before MLIR, would you have suggested implementing VPlan on MLIR as well? Would you maybe even advise retargeting VPlan to MLIR now?

I don’t know enough to say: the tradeoffs depend a lot of where VPlan is, the challenges it faces etc. I don’t know much about VPlan or the engineering priorities behind it.

Here’s an observation though: if you ignore the engineering expense, it would clearly make sense to reimplement the mid-level LLVM optimizers on top of MLIR and replace include/llvm/IR with a dialect definition in MLIR instead.

MLIR as an IR is strictly equal to or better than the LLVM IR data structures in all ways that I’m aware of. In addition to representational flexibility, MLIR allows (and provides) a multithreaded pass manager (function passes run in parallel), has a better representation of PHI nodes, allows better terminators (eliminating need for the really ugly/unfortunate landingpad, catchpad etc hacks), has a better representation for “operands that must be constants” (immarg etc), provides a better representation for location information (important for debugging optimized code and diagnostic emission from the optimizer), and better testing tools (by building on the better location info).

The additional representational flexibility would allow a much more flexible compiler design - one where you could do progressive lowering of high level loops, OpenMP, separate out ABI lowering from Clang IRGen, etc.

I’m very fond of LLVM IR obviously, but a lot has been learned in the nearly 20 years since it was designed and implemented, and MLIR was implemented with a superset of the experience that built LLVM :-)

Q: Relation to MLIR?

A: MLIR is more similar to LLVM-IR than a loop hierarchy.

This is not true: MLIR is great for dialects that want to model loop hierarchies naturally; this is a major focus of the affine dialect (e.g. see affine.for on that page). MLIR is not limited to affine loops, that is just a choice made by the affine dialect; the loop dialect has more general constructs that are less developed.

This is definitely a subjective question. I think that MLIR is closer to LLVM-IR in how it is processed. Both have a sequence of passes running over a single source of truth. Both allow walking the entire structure from every instruction/operation/block. Analyses are at the function or module level. Both have CFGs (I think for certain kinds of transformations it is an advantage that control flow is handled implicitly).

Right, but a frequent way that MLIR is used is without its CFG: most machine learning kernels use nests of loops and ifs, not CFGs. CFGs are exposed when those are lowered out. See some simple examples like:
https://github.com/llvm/llvm-project/blob/master/mlir/test/Transforms/affine-data-copy.mlir

For instance, it also does not feature cheap copies.

I’m not sure what this means.

The possibility to make local changes speculatively without copying the entire data structure. IMHO this is a central idea that allows applying a transformation speculatively, passing it to a legality check and cost heuristic without committing to it. As a consequence, passes do not need to implement these in a transformation-specific manner, drastically reducing the implementation burden.

For instance, more loop transformations are feasible if instructions are moved into the innermost loops. With speculative transformations, we can canonicalize the representation to sink computations into loops – the opposite of what LICM does – and then see whether a transformation can be applied. If not, the speculative representation is discarded without having an effect on the original representation (and without needing to hoist those computations again).
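To make the intended workflow concrete, here is a minimal sketch; all type and function names are hypothetical, not existing LLVM APIs:

```cpp
#include <memory>

struct LoopTree {}; // hypothetical immutable tree of loops and statements

using TreeRef = std::shared_ptr<const LoopTree>;

// Placeholder implementations; a real system would build new tree versions.
static TreeRef sinkComputations(TreeRef T) { return T; }
static TreeRef applyInterchange(TreeRef T) { return T; }
static bool canInterchange(const LoopTree &) { return false; }

static TreeRef trySpeculativeInterchange(TreeRef Original) {
  // Speculatively canonicalize: sink computations into the innermost loop
  // (the opposite of LICM) so that more transformations become applicable.
  TreeRef Canonical = sinkComputations(Original);

  // The legality check runs on the speculative version; the original tree
  // is never mutated.
  if (!canInterchange(*Canonical))
    return Original; // Discard the speculative version at no further cost.

  return applyInterchange(Canonical); // Commit to the new version.
}
```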

Because the MLIR classes have many references to related objects (such as pointers to parents and use-def chains), I don’t think it is feasible to implement on top of MLIR.

Ah yes, I see what you mean. One way to do that is to represent multiple options as an op with a region for each option. This means you only fork the part of the IR that you’re producing variants of. I think this is the red/green tree technique you mentioned, but I’m not sure.

An advantage is that loops and multi-dimensional arrays can be represented in the language without needing to be rediscovered; however, they have to be inserted by a front-end.

This is correct, but I don’t see how this helps if your focus is raising code that has already been lowered to LLVM IR, e.g. by Clang or some other frontend that generates LLVM IR today.

Indeed, I would hope that LLVM-IR can preserve multi-dimensional array accesses in some fashion as well (https://lists.llvm.org/pipermail/llvm-dev/2019-July/134063.html). However, currently MLIR has the advantage of being able to represent them.

I don’t think LLVM IR will ever get there without a massive design change. It is possible that it will support static shaped accesses in limited ways though.

That is, if Clang was generating MLIR, loops and arrays would still have to be rediscovered.

This isn’t true, it would be perfectly sensible to lower C control flow structures directly to MLIR. The primary concerns are things like unstructured switches (think Duff's device) and unstructured gotos, but these occur rarely: they need to be handled correctly, but that can be done with a traditionally lowered CFG and other “best effort” attempts to raise them.

Moreover, syntactical loop structures are also not a reliable indicator that there is a loop. Often enough, do, for, and while are used for purely syntactical reasons (do { } while(0)). Yes, you could eliminate them if a non-loop is detected, but handling break, continue, etc. correctly is a lot of effort. Another case is coroutines that are lowered with gotos into loops, unless you think loop optimizers should handle coroutines directly.

Yes, you’d want to canonicalize the form of course.

On the other hand, natural loop detection on CFGs is quite mature (with a remaining issue of irreducible loops that might appear, but these can also be eliminated again). As a plus, optimization depends less on how the source code is written.

Yep totally. The question is whether you lose semantic information by lowering to a CFG and reconstructing back up. This can affect you when you have higher-level language semantics (e.g. Fortran parallel loops, OpenMP or other concurrency constructs, etc). This is where MLIR excels, of course.

-Chris

IIUC, the green tree is a lighter version of the tree (leaner memory footprint) but still the entire tree. It's ok to lose that info because you don't usually need it for your transformations, and you can always go back to the red tree (via a pointer in the green tree) to ask harder questions. Managing the semantics of the two becomes non-trivial when you start adding and replacing nodes; there's a point where you can't go back to the red tree in the same way anymore.

What I referred to as "journalling" is what you propose here. Add metadata to the actual graph and, during the heuristic search, only clone those. If you make sure you can append those nodes to the graph, and guarantee that the extra nodes are composable (i.e. semantically valid in any order they may be applied), then the original graph + any intermediate state is valid. Therefore, keeping only the extra nodes, and copying them along to try different routes, becomes even cheaper than a green tree.

If those extra nodes are an MLIR dialect, with defined semantics and structured composition, then using them in a heuristic search produces semantically valid intermediate states and lightens the burden of proof for every little step.

> I see it as an advantage in respect of adoption: It can be switched on and off without affecting other parts.

That's not necessarily true.

If we do it like Polly, it is, but then the ability to reuse code is very low and the time spent converting across representations is high. If we want to reuse, then we'll invariably add behavioural dependencies, and disabling the pass may have side-effects.

This applies literally to any pass.

I think the problem of reusability is even worse for the current loop optimization passes. We have multiple, partially transformation-specific dependence analyses, such as LoopAccessAnalysis, DependenceInfo, LoopInterchangeLegality, etc. Another one is currently in the works.

xkcd: Standards actually does apply here, but I also think that
pass-specific dependence analyses do not scale.

> I'd put loop optimizations earlier into the pipeline than vectorization. Where exactly to put them is a phase ordering problem. I'd want to at least preserve multi-dimensional subscripts. Fortunately, MemRef is a core MLIR construct and unlikely to be lowered before lowering to another representation (likely LLVM-IR).

Many front-ends do that even before lowering to IR because of the
richer semantics of the AST, but it's also common for that to
introduce bugs down the line (don't want to name any proprietary
front-ends here).

This is a problem for any intermediate representation. But isn't that also the point of MLIR? To be able to express higher-level language concepts in the IR as dialects? This as well might introduce bugs.

One example is the lowering of multi-dimensional arrays from Clang's AST to LLVM-IR. We can argue whether the C/C++ spec would allow GetElementPtr to be emitted with the "inrange" modifier, but for VLAs, we cannot even express them in the IR, so we had an RFC to change that.

I don't find the argument "there might be bugs" very convincing.

> I don't see how this is relevant for a Clang-based pipeline. Other languages likely need a different pipeline than one intended for C/C++ code.

Yes, but we want our passes to work for all languages and be less
dependent on how well they lower their code.

If they do it well, awesome. If not, and if we can identify patterns
in LLVM IR then there is no reason not to.

This was relevant to the discussion that /all/ front-ends would have
to generate good-enough annotations for loop transformations. Only the
ones that do might enable loop optimization passes.

Generally, I'd try to make it easy for other front-ends to have loop optimizations. For instance, avoid isa<LoadInst> in favor of the more generic "mayReadFromMemory" in analysis/transformation phases.

Michael

Hi Michael-

Liked your proposal and hope that it gets implemented in MLIR. LLVM's linearized IR is not suitable for loop nest optimization (LNO).

We have written multiple loop nest optimizers (in LLVM) in the past five years. We sent a talk proposal to the LLVM developer meeting in 2017; it was rejected. From the review comments, it looked like Polly was the preferred path for loop nest optimization. I hope that is not the case any more.

thanks,
-Prashanth

Once you achieve consensus on the data structure, there is the question of what IR to use within it. I would recommend starting with some combination of “existing LLVM IR operations + high level control flow representation”, e.g. parallel and affine loops. The key here is that you need to always be able to lower in a simple and predictable way to LLVM IR (this is one of the things that classic polyhedral systems did suboptimally, making it difficult to reason about the cost model of various transformations), and this is a natural incremental starting point anyway. Over time, more high level concepts can be gradually introduced. FYI, MLIR already has a reasonable LLVM dialect and can generate LLVM IR from it, so we’d just need an “LLVM IR → MLIR LLVM dialect” conversion, which should be straightforward to build.

Adding an LLVM-IR → MLIR → LLVM-IR round-trip would at the beginning just introduce compile-time overhead and what Renato described as brittleness. I fear this hurts adoption.

Isn’t this true of any higher level IR? Unless I’m missing something big, this seems inherent to your proposal.

No. A loop hierarchy may be created on demand and can be skipped if, e.g., the function does not contain a loop. For IRs that are translation-unit based, the entire module will have to do a round-trip whether changed or not. To improve the situation, one could e.g. add a “has been changed” flag to each function, but it would have to be added somewhere into the MLIR data structure and kept up-to-date on modifications. In a loop-hierarchical structure, only the nodes that have been changed need to be lowered (e.g. an innermost loop) and versioned against the original IR, depending on the assumptions taken.
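A minimal sketch of that difference, assuming a hypothetical LoopTree type and builder (LoopInfo is the existing analysis):

```cpp
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Function.h"

using namespace llvm;

struct LoopTree { bool Changed = false; };               // hypothetical
static LoopTree buildLoopTree(Function &) { return {}; } // hypothetical
static void lowerChangedSubtrees(const LoopTree &) {}    // hypothetical

static void runLoopOptimizations(Function &F, LoopInfo &LI) {
  // No loops: the hierarchy is never built, so there is no round-trip cost.
  if (LI.empty())
    return;

  LoopTree Tree = buildLoopTree(F);
  // ... apply transformations, marking modified subtrees ...
  if (Tree.Changed)
    lowerChangedSubtrees(Tree); // only changed nodes are regenerated
}
```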

This is definitely a subjective question. I think that MLIR is closer to LLVM-IR in how it is processed. Both have a sequence of passes running over a single source of truth. Both allow walking the entire structure from every instruction/operation/block. Analyses are at the function or module level. Both have CFGs (I think for certain kinds of transformations it is an advantage that control flow is handled implicitly).

Right, but a frequent way that MLIR is used is without its CFG: most machine learning kernels use nests of loops and ifs, not CFGs. CFGs are exposed when those are lowered out. See some simple examples like:
https://github.com/llvm/llvm-project/blob/master/mlir/test/Transforms/affine-data-copy.mlir

I agree that a loop nest can be represented in MLIR. What is missing IMHO is being able to have multiple versions of the same code. For instance, raising emitted C++ to such a representation to make it more optimizable may only be possible under preconditions, and may by itself make the code slower. If the raised representation cannot be optimized, we will want to use the original one.

The possibility to make local changes speculatively without copying the entire data structure. IMHO this is a central idea that allows applying a transformation speculatively, passing it to a legality check and cost heuristic without committing to it. As a consequence, passes do not need to implement these in a transformation-specific manner, drastically reducing the implementation burden.

For instance, more loop transformations are feasible if instructions are moved into the innermost loops. With speculative transformations, we can canonicalize the representation to sink computations into loops – the opposite of what LICM does – and then see whether a transformation can be applied. If not, the speculative representation is discarded without having an effect on the original representation (and without needing to hoist those computations again).

Because the MLIR classes have many references to related objects (such as pointers to parents and use-def chains), I don’t think it is feasible to implement on top of MLIR.

Ah yes, I see what you mean. One way to do that is to represent multiple options as an op with a region for each option. This means you only fork the part of the IR that you’re producing variants of. I think this is the red/green tree technique you mentioned, but I’m not sure.

The red/green tree technique even allows re-inserting entire unchanged subtrees (e.g. loop bodies after an interchange). If an op takes multiple regions, each region still has to be a deep copy.

An advantage is that loops and multi-dimensional arrays can be represented in the language without needing to be rediscovered; however, they have to be inserted by a front-end.

This is correct, but I don’t see how this helps if your focus is raising code that has already been lowered to LLVM IR, e.g. by Clang or some other frontend that generates LLVM IR today.

Indeed, I would hope that LLVM-IR can preserve multi-dimensional array accesses in some fashion as well (https://lists.llvm.org/pipermail/llvm-dev/2019-July/134063.html). However, currently MLIR has the advantage of being able to represent them.

I don’t think LLVM IR will ever get there without a massive design change. It is possible that it will support static shaped accesses in limited ways though.

Statically sized rectangular multi-dimensional arrays can already be expressed using a standard GetElementPtr and its inrange qualifier. For dynamically sized multi-dimensional arrays, what is needed is to convey the dimensions of the array in the form of an llvm::Value. In the RFC we discussed an intrinsic and operand bundles; neither looks like a massive design change to me.

On the other hand, natural loop detection on CFGs is quite mature (with a remaining issue of irreducible loops that might appear, but these can also be eliminated again). As a plus, optimization depends less on how the source code is written.

Yep totally. The question is whether you lose semantic information by lowering to a CFG and reconstructing back up. This can affect you when you have higher-level language semantics (e.g. Fortran parallel loops, OpenMP or other concurrency constructs, etc). This is where MLIR excels, of course.

Indeed it is easier to not lower these constructs, but it is not impossible to handle them after lowering (as shown in https://reviews.llvm.org/D69930). I think the relevant difference is that these constructs come with additional guarantees (e.g. Single-Entry-Single-Exit regions) and optimization hurdles (e.g. thread synchronization, where programmers do not expect the compiler to do a lot of things) compared to C++ loop constructs.

Michael

> > I see it as an advantage in respect of adoption: It can be switched on and off without affecting other parts.
>
> That's not necessarily true.

This applies literally to any pass.

Precisely why I don't think adding more passes is an advantage to adoption. :slight_smile:

I don't find the argument "there might be bugs" very convincing.

Sorry, it wasn't an argument, just a jest at the expense of some old
commercial front-ends.

Pass ordering is complex no matter how you slice it.

This was relevant to the discussion that /all/ front-ends would have
to generate good-enough annotations for loop transformations. Only the
ones that do might enable loop optimization passes.

This doesn't scale. You end up with a few pieces of metadata from a single front-end justifying a huge new pass that does a few things in even fewer cases.

VPlan is still crawling, requiring small incremental improvements,
because we're trying to replace an existing pass.

Your proposal is a new pass that does some new things and therefore shouldn't need to be incremental (only when the correct info is present).

But that means now we have two loop transformation infrastructures
that could radically change the IR (and the loop metadata, if any).

Which one runs first will have the advantage, and as I think we agreed, pass ordering is not trivial.

If you restrict this new pass to only doing transformations that are guaranteed to be valid and better (for some definition of both), then you'll end up like Polly, which does wonders to a very small selection of loops and nothing at all to most of them, just wasting time looking at all the possibilities.

If you want to be more aggressive, then the IR will change more often
and the pass ordering problem gets worse, requiring changes in later
passes to cope with the changes.

For better or worse, we already have such a pass, the VPlan
infrastructure. It might not be perfect, but it's the path we took
years ago and I'd strongly encourage people to make it better before
throwing it away and coming up with a completely new idea.

VPlan is a simple idea, to represent loops in a more meaningful way,
to represent transformations in a composable way, and to find a
sequence of transforms that would yield the best result.

To me, your proposal is identical in its meaning, but has different implementation details:
* Tree instead of VPlan objects, with additional benefits for new cases.
* Heuristic search of transforms with cheap state copies.

To me, it sounds like we could do both in VPlan, and even call it
LTPlan (Loop Transformation plan) to make it clear it's more generic
than just SIMD vectorisation.

More importantly, it would be a series of incremental steps towards a
better loop infrastructure, building on the work of the last many
years and likely show real benefits much sooner and with a lot less
conflicts than starting from scratch.

--renato

> Ah yes, I see what you mean. One way to do that is to represent multiple options as an op with a region for each option. This means you only fork the part of the IR that you’re producing variants of. I think this is the red/green tree technique you mentioned, but I’m not sure.

IIUC, the green tree is a lighter version of the tree (leaner memory footprint) but still the entire tree. It's ok to lose that info because you don't usually need it for your transformations, and you can always go back to the red tree (via a pointer in the green tree) to ask harder questions.

The green tree would contain the "heavier" objects, as they do not need to be copied that often. The red tree is necessary for operations depending on the ancestry/surrounding code. An example is checking whether a loop is actually a subloop of an outer loop that it could be interchanged with.

Managing the semantics of the two becomes non-trivial when you start adding and replacing nodes; there's a point where you can't go back to the red tree in the same way anymore.

The model is that the trees are immutable, hence there is no cost of managing node replacement. Nodes only point to their children, their parent (red tree) of the same code version, and potentially the nodes they originate from. When creating a new version, existing nodes are not updated.

(To avoid unnecessary new versions, chains of transformations -- like unroll-then-jam -- may modify existing nodes in place, if it is guaranteed that no reference to them has been passed on.)
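A small sketch to make the pointer structure concrete (hypothetical types, not actual proposed classes):

```cpp
#include <memory>
#include <utility>
#include <vector>

// Green nodes: immutable content, shared between versions; they only know
// their children and, optionally, the node they originate from.
struct GreenNode {
  std::vector<std::shared_ptr<const GreenNode>> Children;
  std::shared_ptr<const GreenNode> OriginatesFrom;
};

// Red nodes: thin wrappers that add the parent link of one code version,
// enabling ancestry queries such as "is this a subloop of that loop?".
struct RedNode {
  std::shared_ptr<const GreenNode> Green;
  const RedNode *Parent = nullptr;
};

// "Modifying" a node builds a new green node; unchanged subtrees are
// re-used by sharing the child pointers instead of deep-copying them.
static std::shared_ptr<const GreenNode>
replaceChild(const GreenNode &Node, unsigned Idx,
             std::shared_ptr<const GreenNode> NewChild) {
  auto Copy = std::make_shared<GreenNode>(Node); // shallow copy
  Copy->Children[Idx] = std::move(NewChild);
  return Copy;
}
```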

What I referred to as "journalling" is what you propose here. Add metadata to the actual graph and, during the heuristic search, only clone those. If you make sure you can append those nodes to the graph, and guarantee that the extra nodes are composable (i.e. semantically valid in any order they may be applied), then the original graph + any intermediate state is valid. Therefore, keeping only the extra nodes, and copying them along to try different routes, becomes even cheaper than a green tree.

If those extra nodes are an MLIR dialect, with defined semantics and structured composition, then using them in a heuristic search produces semantically valid intermediate states and lightens the burden of proof for every little step.

Journaling (I assume in the sense of filesystems and databases: keep a log of items before some change so that the change can be rolled back) is an interesting idea. I think it comes with drawbacks:

* Only one 'head' version is active at a time. Switching to a different version requires rolling back one version and re-applying another. No two versions are active at the same time.

* Every change/roll-back invalidates analyses (or analyses have to be
taught about journaling and keep previous results in memory)

* Cannot re-use subtrees (e.g. an unrolled loop referencing the same subtree factor-many times)

* Potential abuse of the IR (in the sense of the [llvm-dev] "RFC: Polly Status and Integration" discussion)

You mentioned intermediate states remaining valid, so you might have a
different kind of journalling in mind. However, due to the strong
coupling (e.g. use-def chains), I don't think this is possible. Please
elaborate more on your idea.

An analogy would be journaling versus copy-on-write B-tree nodes in Btrfs: their purposes are quite different.

Michael

We made some effort to mainstream Polly by integrating it into the main repository. There were some hurdles, one of the largest being that it relies on an external library written in C. Others are that it requires well-formed IR to do anything and that it may significantly increase compile time. My proposal is intended to be a solution to these problems.

Michael

Precisely why I don't think adding more passes is an advantage to adoption. :slight_smile:

The alternative is to have a separate pass for each kind of loop transformation, i.e. many more passes than a single loop transformation pass.

> I don't find the argument "there might be bugs" very convincing.

Sorry, it wasn't an argument, just a jest at the expense of some old
commercial front-ends.

Pass ordering is complex no matter how you slice it.

Indeed. I am already concerned about additional phase ordering problems if we implement each loop transformation in its own pass, e.g. between loop fusion and loop distribution. Do we first fuse into as few loops as possible and then distribute, or the other way around?

> This was relevant to the discussion that /all/ front-ends would have
> to generate good-enough annotations for loop transformations. Only the
> ones that do might enable loop optimization passes.

This doesn't scale. You end up with a few pieces of metadata from a single front-end justifying a huge new pass that does a few things in even fewer cases.

I'd think that the metadata is not front-end/language specific.

A language where most instructions can access any memory is arguably harder to optimize than a language where only a selected set of instructions can do that. But the metadata describing what memory an instruction can access is not front-end specific.

Your proposal is a new pass that does some new things and therefore shouldn't need to be incremental (only when the correct info is present).

But that means now we have two loop transformation infrastructures
that could radically change the IR (and the loop metadata, if any).

I don't think LLVM's current loop optimizations are well developed.
Only LoopVectorize and LoopUnroll are even enabled by default.

Which one runs first will have the advantage, and as I think we agreed, pass ordering is not trivial.

Which is one of the things this proposal addresses.

If you restrict this new pass to only doing transformations that are
guaranteed to be valid and better (for some definition of both),

This is a strange argument. You want transformations that are invalid and/or worse?

then you'll end up like Polly, which does wonders to a very small selection of loops and nothing at all to most of them, just wasting time looking at all the possibilities.

This is exactly what this proposal is addressing.

I think the small selection mostly stems from Polly requiring well-formed IR. Very often it could algorithmically optimize a problem, but cannot represent the IR in its internal representation: a SCoP, which is based on ISL's schedule tree representation. The main motivation of the proposal is to address exactly this, meaning there is no external library that restricts what we can represent.

A second reason is that Polly relies on ISL's scheduling algorithm, which minimizes re-use distance and maximizes parallelism, while the pass-based optimizers use hand-written heuristics. I want to make both possible within the same framework.

If you want to be more aggressive, then the IR will change more often
and the pass ordering problem gets worse, requiring changes in later
passes to cope with the changes.

My goal is to get a hierarchical optimization order: Once the
higher-level optimizations (that is, loops) have been decided on, only
lower-level optimizations (InstCombine, instruction motion, CFG
simplification, etc) are left to do. If we have to re-do loop
optimizations, something went very wrong.

Michael

A language where most instructions can access any memory is arguably harder to optimize than a language where only a selected set of instructions can do that. But the metadata describing what memory an instruction can access is not front-end specific.

My point is that not all front-ends have the same pace at implementing
new metadata, and the discussion as to what each means in the context
of each language can take a while.

I think the small selection mostly stems from Polly requiring well-formed IR. Very often it could algorithmically optimize a problem, but cannot represent the IR in its internal representation: a SCoP, which is based on ISL's schedule tree representation. The main motivation of the proposal is to address exactly this, meaning there is no external library that restricts what we can represent.

I see, so this is basically the old proposal of re-writing ISL within LLVM, but with a more powerful heuristic search.

I'm not against the idea (as I wasn't back then), but this will take a
very long time, and will somewhat compete with current VPlan (engine
to find transformations) and MLIR (extensible IR) efforts.

I'm also not saying there is a clear and trivial path from VPlan and
MLIR towards your proposal. There may be many hurdles, some even
impractical, so I'm not strongly proposing it either.

But I'd like to explore all the options before starting yet another effort to make high-level parallelisation more tractable.

cheers,
--renato