This strategy becomes extremely slow (in terms of compile time) as the size of the constant increases.
Therefore, I used memref::TensorStoreOp to lower the tensor to a memref, as follows.
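For readers unfamiliar with the op, usage along these lines is presumably what is meant — a minimal sketch, assuming an MLIR version in which memref.tensor_store still exists (it has since been removed in favor of the bufferization dialect); the shapes and names here are made up for illustration:

```mlir
func.func @init() {
  // Hypothetical large constant tensor.
  %cst = arith.constant dense<1.0> : tensor<1024x1024xf32>
  // Allocate a buffer and copy the tensor value into it.
  %buf = memref.alloc() : memref<1024x1024xf32>
  memref.tensor_store %cst, %buf : memref<1024x1024xf32>
  return
}
```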
However, the MLIR project does not provide a lowering pass that converts memref::TensorStoreOp to operations of the LLVM dialect.
Would you recommend lowering memref::TensorStoreOp to LLVM::*Ops directly,
or is there another strategy for lowering large tensors to the memref type?
The idea is to hold an arith.constant that is a tensor; at the bufferization stage, large tensor constants are lowered to memref.global, which is in turn lowered to the LLVM dialect. This means the large constant lives in data memory and is loaded from there when needed; that load is the memref.get_global you see.
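To make the transformation concrete, here is a minimal before/after sketch of what bufferization produces for a tensor constant; the function name and the generated global's symbol name are illustrative (the actual symbol is chosen by the bufferization pass):

```mlir
// Before bufferization: the constant is a tensor value.
func.func @f() -> tensor<1024xf32> {
  %cst = arith.constant dense<5.0e-01> : tensor<1024xf32>
  return %cst : tensor<1024xf32>
}

// After bufferization: the data becomes a read-only global,
// and uses fetch it via memref.get_global.
memref.global "private" constant @__constant_1024xf32 : memref<1024xf32> = dense<5.0e-01>
func.func @f() -> memref<1024xf32> {
  %0 = memref.get_global @__constant_1024xf32 : memref<1024xf32>
  return %0 : memref<1024xf32>
}
```

The memref.global is then lowered to an LLVM global, so the constant data ends up in the binary's data segment rather than being materialized by a long sequence of store instructions.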
I am trying to convert toy::ConstantOp to memref::GlobalOp as mentioned here, but I am not able to find a path to do it. Can anyone give some example passes that do this?