[MLIR] How to define tensor constants with std.constant?

I’m trying to define some tensor constants with std.constant, but I notice mlir-opt just crashes on some very simple cases. My test cases are listed below for reference. Are constant tensors a supported scenario for std.constant? I probably missed something obvious. Thanks.

// crash at mlir-opt --convert-std-to-llvm std.constant.mlir

func @tensor_constant_0d_f32() -> tensor<f32> {
    %cst = std.constant dense<0.0> : tensor<f32>
    return %cst : tensor<f32>
}

// crash at mlir-opt --convert-std-to-llvm std.constant.mlir

func @tensor_constant_1d_f32() -> tensor<4xf32> {
    %cst = std.constant dense<[1.0, 2.0, 3.0, 4.0]> : tensor<4xf32>
    return %cst : tensor<4xf32>
}

// crash at mlir-opt --convert-std-to-llvm std.constant.mlir

func @tensor_constant_2d_f32() -> tensor<3x4xf32> {
    %0 = std.constant dense<[[0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0], [2.0, 2.0, 2.0, 2.0]]>
         : tensor<3x4xf32>
    return %0 : tensor<3x4xf32>
}

The std-to-llvm pass isn’t meant to handle tensor types - they are too high-level an abstraction for it. But a crash is always a bug! It should’ve been a lowering failure. Please use either memref or scalar types when going into the LLVM dialect. One could build a type-conversion pass that rewrites such ops from tensor types to memrefs (or, in the case of 0-d tensors, to scalar types) within the std dialect, but no such pass exists. In practice, these conversions are performed on the higher-level dialects that actually use tensor types extensively in conjunction with other compute ops, as opposed to ops that merely generate constants. A sketch of the scalar/vector alternative follows below.
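For illustration, here is a minimal sketch (my own rewrite of your first two cases, not taken from the thread) of the same constants expressed on types that the standard-to-LLVM conversion does handle: a plain f32 scalar for the 0-d case and a vector<4xf32> for the 1-d case. Both should go through mlir-opt --convert-std-to-llvm without involving tensor types at all.

// expected to lower with mlir-opt --convert-std-to-llvm

func @scalar_constant_f32() -> f32 {
    // 0-d tensor constant rewritten as a scalar constant
    %cst = std.constant 0.0 : f32
    return %cst : f32
}

func @vector_constant_1d_f32() -> vector<4xf32> {
    // 1-d tensor constant rewritten as a vector constant
    %cst = std.constant dense<[1.0, 2.0, 3.0, 4.0]> : vector<4xf32>
    return %cst : vector<4xf32>
}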


Filed https://llvm.org/PR47775 to track turning the crash into a proper error report.

@ruizhang you may be interested in the talk “2020-09-24: Buffer Allocation in MLIR” that you can find here: Talks - MLIR


These don’t seem to crash as of 60cf8453d0beeb510900eda82b5a26b21af49907.