MLIR Tutorial Ch6 failed with error "Dialect `func' not found for custom op 'func.func'"

I have successfully followed the first 5 lessons of the MLIR tutorial. Now I can lower the toy operations to the memref, arith, affine, and func dialects.

However, when I run toyc-ch6 -emit=mlir-llvm example.mlir on the following example.mlir:

module {
  func.func @main() {
    %cst = arith.constant 6.000000e+00 : f64
    %cst_0 = arith.constant 5.000000e+00 : f64
    %cst_1 = arith.constant 4.000000e+00 : f64
    %cst_2 = arith.constant 3.000000e+00 : f64
    %cst_3 = arith.constant 2.000000e+00 : f64
    %cst_4 = arith.constant 1.000000e+00 : f64
    %alloc = memref.alloc() : memref<3x2xf64>
    %alloc_5 = memref.alloc() : memref<3x2xf64>
    %alloc_6 = memref.alloc() : memref<2x3xf64>
    affine.store %cst_4, %alloc_6[0, 0] : memref<2x3xf64>
    affine.store %cst_3, %alloc_6[0, 1] : memref<2x3xf64>
    affine.store %cst_2, %alloc_6[0, 2] : memref<2x3xf64>
    affine.store %cst_1, %alloc_6[1, 0] : memref<2x3xf64>
    affine.store %cst_0, %alloc_6[1, 1] : memref<2x3xf64>
    affine.store %cst, %alloc_6[1, 2] : memref<2x3xf64>
    affine.for %arg0 = 0 to 3 {
      affine.for %arg1 = 0 to 2 {
        %0 = affine.load %alloc_6[%arg1, %arg0] : memref<2x3xf64>
        affine.store %0, %alloc_5[%arg0, %arg1] : memref<3x2xf64>
      }
    }
    affine.for %arg0 = 0 to 3 {
      affine.for %arg1 = 0 to 2 {
        %0 = affine.load %alloc_5[%arg0, %arg1] : memref<3x2xf64>
        %1 = arith.mulf %0, %0 : f64
        affine.store %1, %alloc[%arg0, %arg1] : memref<3x2xf64>
      }
    }
    toy.print %alloc : memref<3x2xf64>
    memref.dealloc %alloc_6 : memref<2x3xf64>
    memref.dealloc %alloc_5 : memref<3x2xf64>
    memref.dealloc %alloc : memref<3x2xf64>
    return
  }
}

It reports an error:

loc("toy1.mlir":2:3): error: Dialect `func' not found for custom op 'func.func'
Error can't load file example.mlir

My MLIR is built at commit llvm/llvm-project@367e618 ("[C++20] [Modules] Emit full specialization of variable template as av…").

Maybe you should use toy.func instead of func.func in your example.mlir:2

I tried that; it doesn't work.

The -emit=mlir-llvm option applies a full conversion from the Arith, Affine, Func, and MemRef dialects (plus the toy.print operation) to the LLVM dialect. However, it tells me the func dialect is unknown, which is strange.

The input to toyc can only be the toy dialect. If you replace func.func with toy.func it’ll complain about the arith dialect being unknown.
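For reference, a toy-level input that toyc-ch6 should be able to load looks roughly like this (reconstructed from memory of the tutorial chapters' -emit=mlir output, so the exact syntax may differ slightly in your build):

```mlir
toy.func @main() {
  %0 = toy.constant dense<[[1.000000e+00, 2.000000e+00], [3.000000e+00, 4.000000e+00]]> : tensor<2x2xf64>
  toy.print %0 : tensor<2x2xf64>
  toy.return
}
```

Everything here is in the toy dialect, which is why the parser accepts it without registering any other dialects.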

If you want to experiment and modify toy so it can load .mlir files containing these dialects, you can change mlir/examples/toy/Ch6/toyc.cpp around line 269, right after the MLIRContext is created, and do:

#include "mlir/Dialect/Affine/IR/AffineOps.h"
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Dialect/MemRef/IR/MemRef.h"

  mlir::MLIRContext context;
  DialectRegistry registry;
  registry.insert<arith::ArithDialect, func::FuncDialect,
                  memref::MemRefDialect, affine::AffineDialect>();
  context.appendDialectRegistry(registry);

(the #include lines above are the headers declaring these dialects; the exact paths can differ between MLIR versions)

I see. Thanks!

Now I have another question.

I was trying to convert the tensor and func dialects to LLVM IR.

I modified Ch6/toyc.cpp

registry.insert<mlir::func::FuncDialect, mlir::tensor::TensorDialect>();

and Ch6/mlir/LowerToLLVM.cpp

  populateFuncToLLVMConversionPatterns(typeConverter, patterns);
  populateTensorToLinalgPatterns(patterns);
  populateLinalgToLLVMConversionPatterns(typeConverter, patterns);

When I run toyc-ch6 -emit=llvm example.mlir, it reports "loc("temp.mlir":2:3): error: failed to legalize operation 'func.func'".

example.mlir:

module {
  func.func @func1(%arg0: tensor<3x4xf32>) {  
    return  
  }
}

Taking a closer look at populateTensorToLinalgPatterns(patterns), I see it adds only one conversion pattern, so clearly this is not the correct way.

How can I lower the tensor dialect to LLVM?

Or how can I lower the tensor to memref and reuse populateMemRefToLLVMConversionPatterns?

@mehdi_amini Do you have any suggestions? :slight_smile:

Going from tensor to LLVM requires some intermediate steps, in particular a step of "bufferization". This is where we turn value-based tensors into mutable buffers (memrefs).
See the "Bufferization" page in the MLIR documentation.
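As a rough sketch (pass names change between MLIR versions, e.g. --convert-memref-to-llvm has since been renamed --finalize-memref-to-llvm in newer trees, so treat this pipeline as an assumption to adapt, not a recipe), a bufferize-then-lower invocation looks something like:

```shell
# Sketch only: bufferize tensor ops, then lower everything to the LLVM dialect.
# Pass names here match MLIR around late 2022; verify against your build.
mlir-opt example.mlir \
  --empty-tensor-to-alloc-tensor \
  --one-shot-bufferize="allow-unknown-ops" \
  --convert-scf-to-cf \
  --convert-memref-to-llvm \
  --convert-func-to-llvm \
  --reconcile-unrealized-casts
```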

There are some tests in the codebase as well, somewhere under mlir/test/Integration/ I think.


Thanks for your suggestions. I've succeeded in doing bufferization, but there are still gaps in lowering to LLVM IR.

For example, I cannot so far lower the following MLIR program, which includes a tensor.empty op, to LLVM IR.

module {
  func.func @func1() {
    %cst = arith.constant 1.66223693E+9 : f32
    %1 = tensor.empty() : tensor<4xf32>
    %9 = tensor.empty() : tensor<1xf32>
    %c1 = arith.constant 1 : index
    %alloc_51 = memref.alloc() : memref<9xf32>
    affine.store %cst, %alloc_51[%c1] : memref<9xf32>
    %26 = affine.if affine_set<(d0) : (-((d0 - 16) ceildiv 2) >= 0)>(%c1) -> tensor<4xf32> {
      memref.store %cst, %alloc_51[%c1] : memref<9xf32>
      %233 = math.expm1 %9 : tensor<1xf32>
      affine.yield %1 : tensor<4xf32>
    }
    return
  }
}

First, when invoking mlir-opt --tensor-bufferize temp.mlir, the error message suggested transforming this op into bufferization.alloc_tensor. I did so by calling bufferization::populateEmptyTensorToAllocTensorPattern, which produced a bufferization.to_tensor op.

After that I wanted to lower these ops to LLVM IR. However, searching the whole project, I only found one OpConversionPattern<bufferization::ToTensorOp>, in mlir/lib/Dialect/Bufferization/Transforms, and it is used to bufferize this op.

Do you have suggestions on this question?

@mehdi_amini

The tensor.empty() op is a bit special: it isn't supposed to be used as a value; only its shape should be used. This is why, I think, it does not bufferize by default…

That said, looking at the code where the suggestion comes from:

    // tensor.empty ops are used to indicate the shape of a tensor. They have
    // no defined contents and cannot be bufferized. However, they can be
    // converted to bufferization.alloc_tensor ops, which then bufferize to an
    // allocation (--empty-tensor-to-alloc-tensor).
    return op->emitOpError("cannot be bufferized, but can be converted to "
                           "bufferization.alloc_tensor");

The comment provides the pass to use here, and indeed if I run it on your example I get:

% bin/mlir-opt --empty-tensor-to-alloc-tensor -one-shot-bufferize="allow-unknown-ops"  /tmp/b.mlir
#set = affine_set<() : (7 >= 0)>
module {
  func.func @func1() {
    %c1 = arith.constant 1 : index
    %cst = arith.constant 1.66223693E+9 : f32
    %alloc = memref.alloc() {alignment = 64 : i64} : memref<4xf32>
    %0 = bufferization.to_tensor %alloc : memref<4xf32>
    %alloc_0 = memref.alloc() : memref<9xf32>
    affine.store %cst, %alloc_0[1] : memref<9xf32>
    %1 = affine.if #set() -> tensor<4xf32> {
      memref.store %cst, %alloc_0[%c1] : memref<9xf32>
      affine.yield %0 : tensor<4xf32>
    }
    memref.dealloc %alloc : memref<4xf32>
    return
  }
}

Note: you should probably start a new thread if you have questions unrelated to the title of this one; the current title won't attract the people most knowledgeable about bufferization.


I’ll move to a new thread. :slight_smile: