Controlling HW-specific pattern injection

Hi everyone,

I just sent out a WIP PR in which I add an AVX2-specific vector.transpose lowering and expose it to LinalgStrategyPasses. This is enough for my use case to be functional, but I am not a fan of the layering.

The trick is that the generic vector.transpose transform and lowering patterns need to be applied at the same time as the “more beneficial” AVX2-specific patterns.

I am wondering if people have strong opinions on how this control should be done. I’ve heard talk in the past of TargetMachine-like abstractions for MLIR, but I am unaware of any progress in that area.

Ideally, I would rather not be blocked for weeks waiting for something principled to emerge when I can easily make progress now, isolate the HW-specific options / patterns, and refactor later.

Still, we need to start the conversation at some point…

So, here goes: ⚙ D113347 [mlir][X86Vector] Add specialized vector.transpose lowering patterns for AVX2.

Thanks!

Right, but as with most things, it is hard to design in a vacuum, and we punted on all of this until the codegen pipelines were at the point where this could be co-designed.
It’s great if we’ve reached this point 🙂
Should we set up a few meetings to try to scope this?

+1 - are there any old docs or ideas that might be worth dusting off to bootstrap the space of options?

I don’t remember a specific doc.

Should we discuss this during the public meeting this week?

RFC: Enhancing Machine Retargetability in MLIR - #9 by stephenneuendorffer was the closest I could think of in terms of docs.

OOC, for the current case here, what would be needed? E.g., does this require something like “give me the vector tile sizes supported by the HW”, after which everything else is in place to avoid making the Linalg lowering AVX2-aware?