[RFC] Continuing with bufferization::{TensorLike, BufferLike} - op semantics update in bufferization

I see. So BufferizableOpInterface::getBufferType is, in a way, both a building block of BufferizableOpInterface::bufferize and the method for querying the output buffer type up front, without actually bufferizing. I guess the renewed logic is then something like:

  • Try BufferizableOpInterface::getBufferType if the op implements BufferizableOpInterface
  • Fall back to the “new getBufferType API” otherwise

(both cases could likely be wrapped in the free-standing getBufferType function or something along these lines - see the sketch below)
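For concreteness, a minimal sketch of that dispatch. Everything here is illustrative: getBufferTypeViaNewApi is a placeholder for the proposed “new getBufferType API”, and the real free-standing helper may differ in signature (e.g. the invocation stack) and failure handling.

```cpp
// Sketch of the two-case dispatch; names and signatures are illustrative.
FailureOr<BufferLikeType> getBufferType(Value value,
                                        const BufferizationOptions &options) {
  // Case 1: the op defining this value implements BufferizableOpInterface.
  if (auto bufferizableOp = options.dynCastBufferizableOp(value))
    return bufferizableOp.getBufferType(value, options);

  // Case 2: fall back to the "new getBufferType API" otherwise
  // (placeholder name for the proposed mechanism).
  return getBufferTypeViaNewApi(value, options);
}
```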

Do you mean at the API level or in the implementation? I guess the latter does make sense indeed. I’d still prefer BufferLikeType BufferizableOpInterface::getBufferType(TensorLikeType, ...) at the signature level though.
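That is, roughly the following shape (the parameter list beyond the tensor type and the failure wrapper are illustrative):

```cpp
// Preferred shape of the interface method: types in, types out.
FailureOr<BufferLikeType>
BufferizableOpInterface::getBufferType(TensorLikeType tensorType,
                                       const BufferizationOptions &options);
```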

Yes. Member function - clear enough (again, the main concern is that we couple “types” and “operations” on these types). Non-member function (rough sketches after the list):

  • bufferizationOptions.getBufferType(TensorLike) → BufferLike (here “smth” is the options object)
  • converter.getBufferType(TensorLike) → BufferLike (here “smth” is some converter object - a DialectInterface that one creates for user-specified tensor/memref types; those likely live in their own dialect anyway)
    • this has to live inside the MLIRContext forever though (so slightly higher memory usage)
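For concreteness, rough sketches of the two shapes (all names made up; neither exists upstream in this form):

```cpp
// Variant 1: a hook carried by the options object.
struct BufferizationOptions {
  // User-provided conversion for custom tensor types; a real version would
  // fall back to the built-in tensor -> memref rules when unset.
  std::function<BufferLikeType(TensorLikeType)> getBufferType;
};

// Variant 2: a dialect interface, registered once for the dialect that owns
// the custom tensor/buffer types. This is the object that "lives inside the
// context forever".
struct TensorToBufferConverter
    : DialectInterface::Base<TensorToBufferConverter> {
  using Base::Base;
  virtual BufferLikeType getBufferType(TensorLikeType type) const = 0;
};
```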

From the standpoint of supporting custom tensors / memrefs, options are probably the most cumbersome approach because they require one to basically reimplement the one-shot-bufferize pass itself (not the underlying mlir::runOneShotModuleBufferize() function that holds the actual logic, though) - since that’s the only way to “seed” the options object? For instance, this is what we do in our downstream (roughly as sketched below) and it works fine, so I am not against this in general.
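For reference, a minimal sketch of what such a downstream pass looks like. Assumptions: the exact runOneShotModuleBufferize signature varies across MLIR revisions (newer ones take extra parameters), and the getBufferType hook is the hypothetical one from the sketch above, not an existing option.

```cpp
#include "mlir/Dialect/Bufferization/Transforms/OneShotAnalysis.h"
#include "mlir/Dialect/Bufferization/Transforms/OneShotModuleBufferize.h"
#include "mlir/Pass/Pass.h"

using namespace mlir;

// Downstream re-implementation of the one-shot-bufferize pass; its only real
// job is to seed the options object before delegating to the upstream driver.
struct MyOneShotBufferizePass
    : PassWrapper<MyOneShotBufferizePass, OperationPass<ModuleOp>> {
  void runOnOperation() override {
    bufferization::OneShotBufferizationOptions options;
    options.bufferizeFunctionBoundaries = true;
    // Seed the custom tensor -> buffer conversion here, e.g. via the
    // hypothetical hook from the sketch above:
    //   options.getBufferType = [](TensorLikeType t) -> BufferLikeType {...};
    if (failed(bufferization::runOneShotModuleBufferize(getOperation(),
                                                        options)))
      signalPassFailure();
  }
};
```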