The following sample IR bufferizes fine:
func.func @sample() {
  %1 = arith.constant dense<[3]> : tensor<1xi8>
  return
}
After One-Shot Bufferize, it becomes:
memref.global "private" constant @__constant_1xi8 : memref<1xi8> = dense<3> {alignment = 64 : i64}
func.func @sample() {
  %0 = memref.get_global @__constant_1xi8 : memref<1xi8>
  return
}
But is there an elegant way to specify the memory space of the memref for a constant op?
defaultMemorySpaceFn works, but unfortunately it cannot specify different spaces for different constant ops.
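For context, the limitation follows from the callback's signature: it receives only the tensor type, so two constant ops producing the same type necessarily get the same answer. A minimal C++ sketch (assuming the upstream BufferizationOptions API; not a standalone program, and the attribute construction is illustrative):

```cpp
// Sketch: configuring One-Shot Bufferize from C++ (assumes MLIR's
// bufferization::OneShotBufferizationOptions).
bufferization::OneShotBufferizationOptions options;
options.defaultMemorySpaceFn =
    [](TensorType type) -> std::optional<Attribute> {
      // The callback sees only the type: every tensor<1xi8> gets the
      // same space, so per-op memory spaces are impossible this way.
      return IntegerAttr::get(
          IntegerType::get(type.getContext(), 64), /*value=*/2);
    };
```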
Tensor encoding is an alternative way, but it does not work for constant ops:
func.func @sample() {
  %1 = arith.constant dense<[3]> : tensor<1xi8, 2>
  return
}
error: 'memref.global' op initial value expected to be of type 'tensor<1xi8>', but was of type 'tensor<1xi8, 2 : i64>'
Does anyone know of other methods?
Thank you very much.
Tensor IR specifies what to compute, but not where the data is located. If you want a buffer in a specific memory space, I’d recommend putting in an explicit allocation, e.g.:
memref.global "private" constant @__constant_1xi8 : memref<1xi8, 2> = dense<3> {alignment = 64 : i64}
func.func @sample() {
  %0 = memref.get_global @__constant_1xi8 : memref<1xi8, 2>
  %1 = bufferization.to_tensor %0 restrict : memref<1xi8, 2>
  return
}
Make sure to include the restrict attribute; otherwise, One-Shot Bufferize will reject the IR.
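For reference, a sketch of what One-Shot Bufferize should produce from the function above, extended with a hypothetical memref.load so the constant is actually read (the bufferization.to_tensor op folds away, and all accesses go straight to the space-2 memref):

```mlir
memref.global "private" constant @__constant_1xi8 : memref<1xi8, 2> = dense<3> {alignment = 64 : i64}
func.func @sample() -> i8 {
  %c0 = arith.constant 0 : index
  %0 = memref.get_global @__constant_1xi8 : memref<1xi8, 2>
  // Hypothetical read added for illustration; not in the original IR.
  %1 = memref.load %0[%c0] : memref<1xi8, 2>
  return %1 : i8
}
```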
Yes, I understand the difference between tensors and buffers (memrefs).
The issue is how to inject the memory space info for this buffer during bufferization.
The method I can think of is based on the encoding information of the tensor type:
// input MLIR
func.func @sample() {
  %1 = arith.constant dense<[3]> : tensor<1xi8, 2> // 2 is the expected memory space
  return
}
// command
mlir-opt --one-shot-bufferize="use-encoding-for-memory-space" input.mlir -o output.mlir
// Expected output MLIR
memref.global "private" constant @__constant_1xi8 : memref<1xi8, 2> = dense<3> {alignment = 64 : i64}
func.func @sample() {
  %0 = memref.get_global @__constant_1xi8 : memref<1xi8, 2> // 2 is the expected memory space
  return
}
Unfortunately, bufferization throws an error:
error: 'memref.global' op initial value expected to be of type 'tensor<1xi8>', but was of type 'tensor<1xi8, 2 : i64>'
It looks like a bug in MLIR ← need help to confirm this.
If it is not a bug ← need help to solve my problem.