[RFC][memref][spirv] Targeting Images in Memref -> SPIR-V Conversions

Hey folks,

We’ve been investigating SPIR-V codegen that targets some of the image ops, which
isn’t something MLIR’s upstream SPIR-V conversions currently support. Our plan
is to lower memref.load and memref.store ops that operate on memrefs in the
“Image” storage class to image access operations in SPIR-V. (Note that since
#144899, memrefs in memory address space 12 can be lowered to the Image storage
class in SPIR-V.)
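
For concreteness, here is a minimal sketch of the kind of input IR we have in mind (the function name, shape, and element type are made up purely for illustration): a plain memref.load whose memref sits in the Image storage class after the address-space mapping from #144899.

```mlir
// A memref placed in the SPIR-V "Image" storage class; before the mapping
// this would be spelled with the integer memory space 12 instead.
func.func @read_texel(%img: memref<8x8xf32, #spirv.storage_class<Image>>,
                      %i: index, %j: index) -> f32 {
  // This is the op we want to lower to a SPIR-V image access operation.
  %texel = memref.load %img[%i, %j] : memref<8x8xf32, #spirv.storage_class<Image>>
  return %texel : f32
}
```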

Right now only spirv.ImageRead and spirv.ImageWrite are supported in the SPIR-V
MLIR dialect. However, once #145873 is merged we will also have the option to
target spirv.ImageFetch.

We are about to start working on a memref → SPIR-V conversion for memrefs in the
Image storage class as described above. Given that we’d like to upstream this
rewrite, we have run into an issue we’d like to request feedback on from the
community. For our purposes we wish to lower memref.load to spirv.ImageFetch;
however, spirv.ImageRead feels like just as valid a conversion target, given
that the source of the conversion in either case will be a memref with the
Image storage class.

spirv.ImageFetch requires an image with the “Sampled” operand set to
“NeedsSampler”, whilst spirv.ImageRead requires the same operand to be set to
“SamplerUnknown” or “NoSampler”. We were therefore wondering whether there is
any extra information we could attach to the memref, in addition to the Image
storage class, to indicate that the memref in question should be lowered to a
sampled image and can therefore (through an intermediate spirv.Image op) be
accessed via spirv.ImageFetch; otherwise the memref would be lowered to an
image without a sampler and accessed via spirv.ImageRead.
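
To make the two candidate lowerings concrete, here is a rough sketch of the SPIR-V-side IR. The image format, dimensionality, coordinate type, and exact assembly syntax are illustrative only, and spirv.ImageFetch is still pending in #145873, so its spelling below is a guess.

```mlir
// Candidate A: lower the memref to a non-sampled image (Sampled = NoSampler)
// and turn the load into spirv.ImageRead.
%texel_a = spirv.ImageRead %storage_img
    : !spirv.image<f32, Dim2D, NoDepth, NonArrayed, SingleSampled, NoSampler, Rgba32f>,
      %coord : vector<2xsi32> -> vector<4xf32>

// Candidate B: lower the memref to a sampled image (Sampled = NeedsSampler),
// extract the underlying image with spirv.Image, then fetch from it.
%plain_img = spirv.Image %sampled_img
    : !spirv.sampled_image<!spirv.image<f32, Dim2D, NoDepth, NonArrayed,
                                        SingleSampled, NeedsSampler, Rgba32f>>
%texel_b = spirv.ImageFetch %plain_img
    : !spirv.image<f32, Dim2D, NoDepth, NonArrayed, SingleSampled, NeedsSampler, Rgba32f>,
      %coord : vector<2xsi32> -> vector<4xf32>
```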

Is anyone aware of a mechanism, other than an additional storage class, that we
could use to achieve this? Or has there already been any upstream or downstream
work in this area that we may have missed?

Cheers in advance :slight_smile:

Pinging @kuhar and @antiagainst since I’m told you might have relevant input / expertise on this :slight_smile:

I’d use an additional memref memory space - so memref<...xT, #spirv.image> or memref<...xT, #spirv.image<StorageClass>>

Hey, thanks for your response. I’m not sure I fully understand what you’re suggesting here. By the time we get to SPIR-V conversion, the memory address spaces for memrefs are expected to be SPIR-V storage classes (e.g. StorageBuffer, Image, Input, etc.), so I’m not sure what the additional <StorageClass> attribute on #spirv.image<StorageClass> gives us beyond what we already have. What we need is a way to distinguish memrefs in the Image address space that should be accessed with an OpImageFetch instruction from memrefs that should be accessed with an OpImageRead instruction.

Hi

Thanks for the context (about being in storage class Image and trying to distinguish ImageFetch vs ImageRead).

I’m suggesting that the invariant you stated above (that memref memory spaces are real SPIR-V storage classes by the time you reach the conversion) be relaxed slightly.

That is, you add a “virtual” storage class … maybe #spirv.storage_class<FetchableImage> or #spirv.pseudo_storage_class<ImageFetch> or … (you’ll have a better time than me inventing the name and deciding where to put it) … which behaves just like Image except that you use OpImageFetch instead of OpImageRead.
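
To make that concrete, something like the following on the memref side (FetchableImage is a completely made-up name and doesn’t correspond to anything in the SPIR-V spec or the current dialect):

```mlir
// Plain Image keeps the existing OpImageRead lowering.
%a = memref.load %read_img[%i, %j]
    : memref<8x8xf32, #spirv.storage_class<Image>>

// Hypothetical pseudo storage class marking memrefs that should be lowered to
// a sampled image and accessed with OpImageFetch.
%b = memref.load %fetch_img[%i, %j]
    : memref<8x8xf32, #spirv.storage_class<FetchableImage>>
```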

Hey,

Yep, okay, this makes sense; we were considering proposing something along these lines. As long as adding these pseudo storage classes (which don’t map 1-1 to anything in the SPIR-V spec) to the MLIR dialect is acceptable to the dialect owners, then I think this will work.

Thanks for the help.