Constant memref

I faced an issue when developing an embedded project using MLIR.

MLIR defines the memref dialect; normally the IR could look like:
%0 = memref.alloc
%1 = memref.alloc
%2 = sample_dialect.add(%0, %1)
memref.alloc can be lowered to malloc or new in terms of a C/C++ program.

But my scenario is different: I need a pre-allocated address rather than memory allocated by the OS.
I found that memref is not an operation (it is a data type) and cannot be instantiated like a C++ variable.
I hope it can be:
memref var1;
memref var2;
sample_dialect(var1, var2);
Is it possible?

memref.global might be a solution, but its meaning might not be a proper fit for my case.

How can I do this in MLIR?
I really appreciate your kind help :slight_smile:

Any operation can define values of the memref type. You are free to define mydialect.make_memref_at_address 0xDEADBEEF : i64 -> memref<1024xi8> and assign it the right semantics. You can think of memref as a pointer with associated shape and layout metadata.
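As a sketch, IR using such a hypothetical operation might look like the following (the dialect name, op name, and the address 0x20000000 are assumptions for illustration, not an existing upstream op):

```mlir
// Hypothetical op that wraps a fixed, pre-allocated address in a memref value.
%buf = mydialect.make_memref_at_address 0x20000000 : i64 -> memref<1024xi8>

// The resulting value is then usable like any other memref.
%c0 = arith.constant 0 : index
%byte = memref.load %buf[%c0] : memref<1024xi8>
```

When lowering to LLVM, such an op would fill the memref descriptor's pointer fields with the constant address instead of the result of an allocation call.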

Thank you for the help,
but could you please explain more?
memref is a data type and cannot be instantiated.
How do I create a memref object based on a MemRefType?

mlir::Type floatType = mlir::Float32Type::get(&context);
mlir::Type memrefType = mlir::MemRefType::get({4, 5}, floatType);

You need to be more specific… Have you followed the MLIR tutorial? Do you understand the difference between the IR and the code that creates the IR (which is different from the code that might have been lowered to the IR)? You cannot create an object of an IR type (what you call “instantiate”) in C++; this does not make sense because the IR and the C++ that creates the IR are different things. You can, however, create IR that defines an IR object of an IR type such as memref. This IR will consist of operations. We don’t have an operation that defines a memref at a fixed address, so you will need to introduce a new operation in a custom dialect.
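For reference, here is a minimal sketch of how memref values are always defined by operations in the IR; it uses only existing upstream ops (the symbol name @buffer and the shape are made up):

```mlir
// A module-level global buffer, initialized at compile time.
memref.global "private" constant @buffer : memref<4x5xf32> = dense<0.0>

func.func @use() -> f32 {
  // The memref *value* comes from an operation, not from declaring a variable.
  %m = memref.get_global @buffer : memref<4x5xf32>
  %c0 = arith.constant 0 : index
  %v = memref.load %m[%c0, %c0] : memref<4x5xf32>
  return %v : f32
}
```

Note that even here the address is chosen by the compiler/linker; pinning it to a specific hardware address would still require a custom op or a custom lowering.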

Yes, you are right.
MLIR does not allow creating a single object; everything should be expressed through operations.

By the way, could you help take a look at the MLIR below?
"lmhlo.constant"(%1) {value = dense<0xFF800000> : tensor<f32>} : (memref<f32>) -> () confuses me.
Its definition is located at tensorflow/lhlo_ops.td at 92f1dd09bef516a6eb0ad6be6833f28785ef2be8 · tensorflow/tensorflow · GitHub
How can %1 become an input (operand) of lmhlo.constant?

module attributes {tf.versions = {bad_consumers = [], min_consumer = 0 : i32, producer = 0 : i32}} {
  func.func @main(%arg0: memref<1x224x224x3xf32>) -> memref<1x55x55x96xf32> attributes {tf.entry_function = {control_outputs = "", inputs = "Placeholder", outputs = "maxpool1/MaxPool"}} {
    %0 = memref.alloc() : memref<1x112x112x96xf32>
    "lmhlo.constant"(%0) {value = dense<0.000000e+00> : tensor<1x112x112x96xf32>} : (memref<1x112x112x96xf32>) -> ()
    %1 = memref.alloc() : memref<f32>
    "lmhlo.constant"(%1) {value = dense<0xFF800000> : tensor<f32>} : (memref<f32>) -> ()
    %2 = memref.alloc() : memref<7x7x3x96xf32>
    "lmhlo.constant"(%2) {value =
    %3 = memref.alloc() : memref<96xf32>
    "lmhlo.constant"(%3) {value = dense<[-0.0159829315, 0.0515543744, 0.00242001633, 0.108720623, -2.800880e+00, 0.0132846264, 0.109862484, 2.564370e-02, 0.0286106765, 0.00628543925, -2.62454987, 0.0430002175, 6.652820e-02, 0.0671709701, 0.0830989181, 0.0367722884, 0.045740366, 0.0740528852, 0.1394234, 0.0801440253, 0.144587547, -0.424634635, -0.24156265, -0.335162252, -0.261343211, -0.456626803, -0.310935348, 0.0992531478, 0.278961778, 2.455200e-02, 0.0612100735, 0.0536732078, -0.380231261, 0.15453954, 0.0197914504, 0.00870858132, -0.00665073609, 0.306952387, -0.299116373, -0.11611747, -2.04525161, -1.851590e-02, 0.230225727, 0.00548936659, 0.0375830159, 0.159722373, 0.0059064352, 0.142434776, 0.0331431404, -0.457383722, 0.115609534, -0.147989258, -0.764221787, -0.519262373, 0.0707052052, 0.115098685, 0.100365952, -0.0294663366, 0.0876697451, 4.891170e-02, 0.138416246, 0.0470453948, 0.0939431712, -0.0662895963, 0.109367736, -0.528198779, 0.0481892154, 0.0206056554, 0.255892098, -0.018493589, 0.0286216382, 0.142589465, 0.28172341, -1.55166733, -2.12946439, 0.142471924, 0.139095396, 0.0741286352, 0.117532089, -0.0687338114, 0.0470842794, -0.129683986, 0.0512922667, 0.0276815854, 0.352964222, 0.0905589461, 0.0808553547, -0.0327927209, 5.596700e-02, 0.0737980902, 0.0151464995, -0.691875159, 0.0173704922, -0.446471632, -0.512368083, 0.0248008128]> : tensor<96xf32>} : (memref<96xf32>) -> ()
    %4 = memref.alloc() : memref<1x112x112x96xf32>
    lmhlo.convolution(%arg0, %2, %4) dim_numbers = [b, 0, 1, f]x[0, 1, i, o]->[b, 0, 1, f], window = {stride = [2, 2], pad = [[2, 3], [2, 3]], rhs_dilate = [1, 1]} {batch_group_count = 1 : i64, feature_group_count = 1 : i64} : (memref<1x224x224x3xf32>, memref<7x7x3x96xf32>, memref<1x112x112x96xf32>) -> ()
    %5 = memref.alloc() : memref<1x112x112x96xf32>
    "lmhlo.broadcast_in_dim"(%3, %5) {broadcast_dimensions = dense<3> : tensor<1xi64>} : (memref<96xf32>, memref<1x112x112x96xf32>) -> ()
    %6 = memref.alloc() : memref<1x112x112x96xf32>
    "lmhlo.add"(%4, %5, %6) : (memref<1x112x112x96xf32>, memref<1x112x112x96xf32>, memref<1x112x112x96xf32>) -> ()
    %7 = memref.alloc() : memref<1x112x112x96xf32>
    "lmhlo.maximum"(%6, %0, %7) : (memref<1x112x112x96xf32>, memref<1x112x112x96xf32>, memref<1x112x112x96xf32>) -> ()
    %8 = memref.alloc() : memref<1x55x55x96xf32>
    "lmhlo.reduce_window"(%7, %1, %8) ({
    ^bb0(%arg1: memref<f32>, %arg2: memref<f32>, %arg3: memref<f32>):
      %9 = memref.alloc() : memref<f32>
      "lmhlo.maximum"(%arg1, %arg2, %9) : (memref<f32>, memref<f32>, memref<f32>) -> ()
      "lmhlo.copy"(%9, %arg3) : (memref<f32>, memref<f32>) -> ()
      "lmhlo.terminator"() : () -> ()
    }) {window_dimensions = dense<[1, 3, 3, 1]> : tensor<4xi64>, window_strides = dense<[1, 2, 2, 1]> : tensor<4xi64>} : (memref<1x112x112x96xf32>, memref<f32>, memref<1x55x55x96xf32>) -> ()
    return %8 : memref<1x55x55x96xf32>
  }
}

Memref represents a reference to a memory address. In order for a value to be stored at that address, something must write it there. The operations that write into an address take that address as an operand because they need to know what it is.
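A minimal sketch of this destination-passing pattern (the scalar shape here is made up for illustration): first allocate a buffer, then pass it as the operand that the op writes into.

```mlir
// %buf is just an address (plus shape/layout metadata); it holds no data yet.
%buf = memref.alloc() : memref<f32>
// lmhlo.constant takes %buf as an operand so it knows *where* to write the value.
"lmhlo.constant"(%buf) {value = dense<1.0> : tensor<f32>} : (memref<f32>) -> ()
```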

As you can see from its definition, it does not take an operand as input.
I don't know how this magic thing happens; maybe the MemWrite?
def LHLO_ConstantOp : LHLO_Op<"constant", []> {
  let summary = "Constant operator";
  let description = [{
    Represents a constant value.
  }];
  let arguments = (ins
    ElementsAttr:$value,
    Arg<LHLO_Buffer, "", [MemWrite]>:$output
  );

  let hasCanonicalizer = 1;
}

As I can see from the definition, it does take the output destination as an argument, right here:

let arguments = (ins
  ElementsAttr:$value,
  Arg<LHLO_Buffer, "", [MemWrite]>:$output  // <- here!!!
)

It may have a slightly confusing name – “output” – but it is an operand to the operation. In ODS, arguments are a mix of operation attributes and operands, with additional (typing) constraints and, occasionally, side effect indications. Please make sure you follow the tutorial and read up on the rest of the documentation to understand at least the common terminology.
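To make the attribute/operand split concrete, here is how the two kinds of ODS arguments show up in the generic textual form (the 2-element shape is an assumption for illustration):

```mlir
%out = memref.alloc() : memref<2xf32>
// $value is an attribute: it lives in the {...} dictionary, fixed in the IR.
// $output is an operand: it appears in the (...) operand list as an SSA value.
"lmhlo.constant"(%out) {value = dense<[1.0, 2.0]> : tensor<2xf32>} : (memref<2xf32>) -> ()
```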
