Hi, while exploring the Toy tutorial in MLIR, I have noticed that the LLVM IR produced for even a small Toy program is a bit complicated. For example:
def main() {
  var a = [1, 2, 3, 4, 5];
  var b = [5, 6, 7, 8, 9];
  var c = a + b;
}
Its MLIR representation is:
module {
  toy.func @main() {
    %0 = toy.constant dense<[1.000000e+00, 2.000000e+00, 3.000000e+00, 4.000000e+00, 5.000000e+00]> : tensor<5xf64>
    %1 = toy.constant dense<[5.000000e+00, 6.000000e+00, 7.000000e+00, 8.000000e+00, 9.000000e+00]> : tensor<5xf64>
    %2 = toy.add %0, %1 : (tensor<5xf64>, tensor<5xf64>) -> tensor<*xf64>
    toy.return
  }
}
But when I emit LLVM IR from this, the output is quite hard to read at first glance. If it instead looked like the vector form in LLVM IR, e.g.:
define dso_local void @_Z4mainv() {
  %1 = alloca <5 x double>, align 64
  %2 = alloca <5 x double>, align 64
  %3 = alloca <5 x double>, align 64
  store <5 x double> <double 1.000000e+00, double 2.000000e+00, double 3.000000e+00, double 4.000000e+00, double 5.000000e+00>, ptr %1, align 64
  store <5 x double> <double 5.000000e+00, double 6.000000e+00, double 7.000000e+00, double 8.000000e+00, double 9.000000e+00>, ptr %2, align 64
  %4 = load <5 x double>, ptr %1, align 64
  %5 = load <5 x double>, ptr %2, align 64
  %6 = fadd <5 x double> %4, %5
  store <5 x double> %6, ptr %3, align 64
  ret void
}
then it would be much easier to understand.
So what I would like to know is: is there a way to lower Toy IR to LLVM IR in a vectorized form? Any suggestions would help.
Thanks
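For what it's worth, I imagine (this is just my guess, not something the Toy tutorial does) that the same computation expressed on `vector<5xf64>` instead of `tensor<5xf64>` might look like:

```
func.func @main() {
  %a = arith.constant dense<[1.0, 2.0, 3.0, 4.0, 5.0]> : vector<5xf64>
  %b = arith.constant dense<[5.0, 6.0, 7.0, 8.0, 9.0]> : vector<5xf64>
  // arith.addf on vector operands should become an fadd <5 x double>
  %c = arith.addf %a, %b : vector<5xf64>
  return
}
```

and then presumably a pipeline along the lines of `mlir-opt --convert-arith-to-llvm --convert-func-to-llvm --reconcile-unrealized-casts` followed by `mlir-translate --mlir-to-llvmir` would give the `fadd <5 x double>` form. But I'm not sure how to get from the Toy dialect's `tensor` types to something like this.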