I am of the opinion that handling scalable vectors (SV)
with builtins and an opaque SV type is a good option:
1. The implementation of SV with builtins is simpler than changing the IR.
2. Most of the transforms in opt are scalar opts; they do not optimize
vector operations and will not deal with SV either.
3. With builtins there are fewer places to pay attention to,
as most of the compiler is already dealing with builtins in
a neutral way.
4. The builtin approach is more targeted and confined: it allows
us to amend one optimizer at a time.
With the alternative of changing the IR, one has to touch all the
passes in the initial implementation.
Interestingly, with similar considerations, I've come to the opposite
conclusion. While in theory the intrinsics and opaque types are more
targeted and confined, this only remains true *if* we don't end up
teaching a bunch of transformations and analysis passes about them.
However, I feel it is inevitable that we will:
1. While we already have unsized types in the IR, SV will add more of
them, and opaque or otherwise, there will be some cost to making all of
the relevant places in the optimizer not crash in their presence. We
end up paying this cost either way.
2. We're going to end up wanting to optimize SV operations. If we have
intrinsics, we can add code to match (a + b) - b => a, but the question
is: can we reuse the code in InstCombine which does this? We can make
the answer yes by adding sufficient abstraction, but the code
restructuring seems much worse than just adjusting the type system.
Otherwise, we can't reuse the existing code for these SV optimizations
if we use the intrinsics, and we'll be stuck in the unfortunate situation
of slowly rewriting a version of InstCombine just to operate on the SV
intrinsics. Moreover, the code will be worse because we need to
effectively extract the type information from the intrinsic names. By
changing the type system to support SV, it seems like we can reuse
nearly all of the relevant InstCombine code.
3. It's not just InstCombine (and InstSimplify, etc.), but we might
also need to teach other passes about the intrinsics and their types
(GVN?). It's not clear that the problem will be well confined.
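To make point 2 concrete, here is a minimal Python sketch of why the
(a + b) - b => a rewrite reuses cleanly with native types but not with
opaque intrinsic calls. The mini-IR below is purely illustrative, not
LLVM's actual C++ API:

```python
# A toy expression node; NOT LLVM's API, purely for illustration.
class Node:
    def __init__(self, op, *operands):
        self.op = op
        self.operands = operands

def simplify_sub(node):
    """Rewrite (a + b) - b => a, the InstCombine pattern discussed above."""
    if node.op == "sub":
        lhs, rhs = node.operands
        if lhs.op == "add" and lhs.operands[1] is rhs:
            return lhs.operands[0]
    return node

# With native SV types, "add"/"sub" cover scalar, fixed-vector, and
# scalable-vector operations alike, so this one matcher suffices.  Under
# the intrinsic scheme the add is instead an opaque call whose type is
# encoded in its name (e.g. a call to a hypothetical "sv.add.nxv4i32"),
# so the matcher above never fires and a parallel, name-parsing matcher
# has to be written and kept in sync.
a, b = Node("x"), Node("y")
assert simplify_sub(Node("sub", Node("add", a, b), b)) is a
```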
5. Optimizing code written with SV intrinsic calls can be done
with about the same implementation effort in both cases
(builtins and changing the IR.) I do not believe that changing
the IR to add SV types makes any optimizer work magically out
of a sudden: no free lunch. In both cases we need to amend
all the passes that remove inefficiencies in code written with
SV intrinsic calls.
6. We will need a new SV auto-vectorizer pass that relies less on
if-conversion, runtime disambiguation, and unrolling for the
prologue/epilogue.
It's not obvious to me that this is true. Can you elaborate? Even with
SV, it seems like you still need if-conversion and pointer checking, and
unrolling the prologue/epilogue loops is handled later anyway by the
full/partial unrolling pass; I don't see any fundamental change there.
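For readers less familiar with the term: if-conversion replaces control
flow in the loop body with a select so the loop becomes vectorizable. A
minimal sketch, expressed here in Python purely for illustration:

```python
def sum_positive_branchy(xs):
    """Scalar loop with a branch in the body; the branch blocks vectorization."""
    total = 0
    for x in xs:
        if x > 0:
            total += x
    return total

def sum_positive_if_converted(xs):
    """Same loop after if-conversion: the branch becomes a select."""
    total = 0
    for x in xs:
        total += x if x > 0 else 0  # select(x > 0, x, 0)
    return total

assert sum_positive_branchy([3, -1, 4, -2]) == sum_positive_if_converted([3, -1, 4, -2]) == 7
```

Once the body is branch-free like this, each iteration does the same
work and the loop can be mapped onto (predicated) vector lanes.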
What is true is that we need to change the way that the vectorizer deals
with horizontal operations (e.g., reductions) - these all need to turn
into intrinsics to be handled later. This seems like a positive change,
as the hardware helps with all of these cases and expands the number
of loops that can be vectorized.
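As an illustration of the reduction point, here is a sketch of a
vectorized sum: the loop accumulates into vector lanes and a
horizontal-add intrinsic (standing in for something like LLVM's real
llvm.vector.reduce.add) combines the lanes at the end. All names here
are illustrative:

```python
def vec_add(acc, chunk):
    """Lane-wise vector add (one vector instruction in real code)."""
    return [a + c for a, c in zip(acc, chunk)]

def horizontal_reduce_add(v):
    """Stand-in for a horizontal-reduction intrinsic such as
    llvm.vector.reduce.add; folds all lanes into one scalar."""
    return sum(v)

def vectorized_sum(xs, vl=4):
    acc = [0] * vl          # one running partial sum per lane
    i = 0
    while i + vl <= len(xs):
        acc = vec_add(acc, xs[i:i + vl])
        i += vl
    total = horizontal_reduce_add(acc)
    for x in xs[i:]:        # scalar epilogue for the remainder
        total += x
    return total

assert vectorized_sum(list(range(10))) == sum(range(10))
```

The point being debated is where the horizontal_reduce_add step comes
from: as an intrinsic emitted by the vectorizer and lowered later by
the target.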
Having native SV types or just plain builtins is equivalent here,
as the code generator of the vectorizer can be improved to not
generate inefficient code.
This does not seem equivalent because while the mapping between scalar
operations and SV operations is straightforward with the adjusted type
system, the mapping between the scalar operations and the intrinsics
will require extra infrastructure to implement the mapping. Not that
this is necessarily difficult to build, but it needs to be updated
whenever we otherwise change the IR, and thus adds additional
maintenance cost for all of us.
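A sketch of the extra infrastructure being described: under the
intrinsic scheme the vectorizer needs an explicit opcode-to-intrinsic
table, whereas with native SV types the scalar opcode carries over
unchanged. The intrinsic names below are hypothetical, not real LLVM
names:

```python
# Hypothetical opcode -> SV-intrinsic table.  Every new IR operation
# (and every renamed one) means another entry to add and maintain here.
SCALAR_TO_SV_INTRINSIC = {
    "add": "sv.add",
    "sub": "sv.sub",
    "mul": "sv.mul",
}

def widen(opcode, elem_type):
    """Map a scalar opcode to its SV intrinsic form, e.g.
    ("add", "nxv4i32") -> "sv.add.nxv4i32"."""
    try:
        return f"{SCALAR_TO_SV_INTRINSIC[opcode]}.{elem_type}"
    except KeyError:
        raise ValueError(f"no SV intrinsic registered for {opcode!r}")

assert widen("add", "nxv4i32") == "sv.add.nxv4i32"
```

With an adjusted type system, widen() would be the identity on the
opcode, and only the operand types change; no table to keep in sync.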