I don’t have a strong preference, though it is clear we should pick one. I’d mildly prefer the select form for readability. From an optimization standpoint, I see reasonable arguments for either.
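For reference, here is a minimal sketch of the two forms under discussion; the types, lane choices, and function names are just an illustration, not the exact case from the original post:

; Blend written as a select on a constant i1 condition vector: true lanes
; take the element from %a, false lanes take it from %b.
define <4 x i32> @blend_select(<4 x i32> %a, <4 x i32> %b) {
  %r = select <4 x i1> <i1 true, i1 false, i1 true, i1 false>, <4 x i32> %a, <4 x i32> %b
  ret <4 x i32> %r
}

; The same blend written as a shufflevector: mask indices 0-3 pick lanes
; of %a, indices 4-7 pick lanes of %b.
define <4 x i32> @blend_shuffle(<4 x i32> %a, <4 x i32> %b) {
  %r = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 0, i32 5, i32 2, i32 7>
  ret <4 x i32> %r
}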
I have a slight preference for shufflevector, because it makes it simpler to reason about sequences of shuffles in which only some of the shuffles can be converted into selects (because the input and output vector sizes of the others don’t match).
I’m not sure this is a particularly good reason, though.
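To make the size-changing case concrete (a sketch with made-up types and names): a shuffle whose result has a different element count than its inputs has no select equivalent, because select requires its operands and its result to have the same type.

; Length-changing shuffle: only the low two lanes of %v survive.
; A select cannot express this, since its true/false operands and its
; result must all be the same vector type.
define <2 x i32> @take_lo(<4 x i32> %v) {
  %r = shufflevector <4 x i32> %v, <4 x i32> undef, <2 x i32> <i32 0, i32 1>
  ret <2 x i32> %r
}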
I must admit I prefer the shuffle canonicalization, mainly because we have put a lot of effort into finding optimal instruction sequences for obscure shuffle patterns. We could refactor easily enough to use either, though.
I don’t know which makes the most logical sense in this case though. Certainly choosing the select pattern better matches OpenCL’s native select interface.
x86 has also put a lot of effort into shuffle lowering…so much so that it is its own life-form and brings most online codeviewer apps to their knees when you try to open X86ISelLowering.cpp.
Given that:
1. There are at least 2 targets that lean towards shuffle (Martin’s comment + x86 uses lowerVSELECTtoVectorShuffle() for all cases like the example posted here)
2. Size-changing shuffles are easier to reason about with other shuffles (Michael’s comment)
3. Insert/extract are easier to reason about with shuffles (Eli’s comment in D22114; see the example just below)
…we should probably go with shuffle as the canonical encoding. Like Philip, I think the select is easier to read in IR (and mentally translate to an x86 ‘blend’), but there’s no other advantage for select?
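As a sketch of the insert/extract point above (illustrative types and names, not the case from D22114): an extractelement feeding an insertelement folds naturally into a single shuffle, and that result composes with neighboring shuffles more obviously than a select in the middle of the chain would.

; Extract lane 2 of %a and insert it into lane 0 of %b...
define <4 x i32> @ins_ext(<4 x i32> %a, <4 x i32> %b) {
  %e = extractelement <4 x i32> %a, i32 2
  %r = insertelement <4 x i32> %b, i32 %e, i32 0
  ret <4 x i32> %r
}

; ...which is the same as one shuffle: lane 0 comes from %a (mask index 2),
; lanes 1-3 come from %b (mask indices 5-7).
define <4 x i32> @ins_ext_shuffle(<4 x i32> %a, <4 x i32> %b) {
  %r = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 2, i32 5, i32 6, i32 7>
  ret <4 x i32> %r
}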
I’ll give this thread some more time before posting a patch…in case we’ve missed something.
From: "Sanjay Patel via llvm-dev" <llvm-dev@lists.llvm.org>
To: "Martin ORiordan" <Martin.ORiordan@movidius.com>
Cc: "LLVM Developers" <llvm-dev@lists.llvm.org>
Sent: Monday, August 29, 2016 5:45:51 PM
Subject: Re: [llvm-dev] IR canonicalization: vector select or
shufflevector?
x86 has also put a lot of effort into shuffle lowering...so much so
that it is its own life-form and brings most online codeviewer apps
to their knees when you try to open X86ISelLowering.cpp.
Given that:
1. There are at least 2 targets that lean towards shuffle (Martin's comment + x86 uses lowerVSELECTtoVectorShuffle() for all cases like the example posted here)
This is irrelevant, as such. We can always transform these into shuffle SDAG nodes regardless of how they look in the IR.
That having been said, I'm fine with choosing shuffles as the canonical form over selects with constant vector conditions. If we don't, we'd need some utility to abstract away the difference regardless.