How to lower linalg.map?

I’m converting a TensorFlow model to MLIR, but I’ve run into a problem: I can’t lower linalg.map, which prevents me from converting the module to LLVM IR. I looked at some of the lowering test cases, but some of the passes used there are not available to me, for example -test-tiling-interface=lower-to-scalar-using-scf-for. What do the passes whose names start with “test” mean? Here is some of the code. If anyone can help me, I would be grateful.

// Here are some of the passes I use
mlir-opt resnet.mlir \
  -pass-pipeline="builtin.module(func.func(tosa-to-linalg-named),func.func(tosa-to-tensor),func.func(tosa-to-linalg),func.func(tosa-to-arith))" | \
mlir-opt -linalg-bufferize -empty-tensor-to-alloc-tensor -arith-bufferize \
  -tensor-bufferize -func-bufferize -buffer-deallocation -convert-linalg-to-loops \
  -convert-vector-to-scf -convert-scf-to-cf \
  -convert-vector-to-llvm -arith-expand -convert-math-to-llvm \
  -expand-strided-metadata -finalize-memref-to-llvm -convert-math-to-llvm \
  -convert-func-to-llvm -llvm-request-c-wrappers -reconcile-unrealized-casts
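
To see which ops survive this pipeline, the result can be written to a file and searched for any remaining linalg ops (the file name below is just a placeholder):

    # Append "-o lowered.mlir" to the last mlir-opt invocation above, then:
    grep -n "linalg\." lowered.mlir   # any hit is a linalg op that was not lowered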

Here is part of the model’s IR; this is the linalg.map that I could not lower.

    %6076 = bufferization.to_tensor %6075 : memref<1x305x305x3xf32>
    %6077 = llvm.extractvalue %6074[0] : !llvm.struct<(ptr, ptr, i64, array<4 x i64>, array<4 x i64>)>
    llvm.call @free(%6077) : (!llvm.ptr) -> ()
    %mapped = linalg.map outs(%6076 : tensor<1x305x305x3xf32>)
      () {
        linalg.yield %28 : f32
      }

Sorry for asking this here; it turned out to be a simple problem and I was just missing the relevant experience. I have found the cause now: although I used convert-linalg-to-loops, I had not realized that the linalg.map is only generated after the first run of convert-linalg-to-loops, so I needed to run convert-linalg-to-loops a second time.
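
In other words, the fix is just to repeat the pass once more after the point at which the linalg.map shows up. A minimal sketch of the idea (the file name is a placeholder, and most of my other passes are omitted):

    # Placeholder input file; only the passes relevant to the fix are shown.
    # The first -convert-linalg-to-loops lowers the linalg ops that already
    # exist; the linalg.map only shows up after that, so -linalg-bufferize plus
    # a second -convert-linalg-to-loops are needed to lower it as well.
    mlir-opt partially_lowered.mlir \
      -convert-linalg-to-loops \
      -linalg-bufferize \
      -convert-linalg-to-loops \
      -o out.mlir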

Is the problem solved when you run convert-linalg-to-loops a second time? Looking at the official test cases, a linalg.map op does appear when using -tensor-bufferize; please look at this test case:

// CHECK-LABEL: func @tensor.generate(
// CHECK-SAME:    %[[ARG:.*]]: tensor<*xf32>,
// CHECK-SAME:    %[[DYNAMIC_EXTENT:.*]]: index) -> tensor<?xindex> {
// CHECK-DAG:     %[[ARG_M:.*]] = bufferization.to_memref %[[ARG]] : memref<*xf32>
// CHECK-DAG:     %[[ALLOC:.*]] = memref.alloc(%[[DYNAMIC_EXTENT]]) {{.*}} : memref<?xindex>
// CHECK:         %[[ALLOC_T:.*]] = bufferization.to_tensor %[[ALLOC]]
// CHECK:         %[[MAPPED:.*]] = linalg.map
// CHECK:         outs(%[[ALLOC_T]] : tensor<?xindex>)
// CHECK:           %[[INDEX:.*]] = linalg.index 0 : index
// CHECK:           %[[ELEM:.*]] = memref.dim %[[ARG_M]], %[[INDEX]] : memref<*xf32>
// CHECK:           linalg.yield %[[ELEM]]
// CHECK:         }
// CHECK:         return %[[MAPPED]] : tensor<?xindex>
// CHECK:       }
func.func @tensor.generate(%arg: tensor<*xf32>, %dynamic_extent: index) -> tensor<?xindex> {
  %result = tensor.generate %dynamic_extent {
  ^bb0(%i : index):
    %elem = tensor.dim %arg, %i : tensor<*xf32>
    tensor.yield %elem : index
  } : tensor<?xindex>
  return %result : tensor<?xindex>
}
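
If you want to reproduce this outside of the test suite, running just -tensor-bufferize over that function should already show the linalg.map appearing, roughly like this (the file name is a placeholder):

    # "tensor_generate.mlir" would contain only the func.func above.
    mlir-opt tensor_generate.mlir -tensor-bufferize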

Yes, the problem has been solved. If you have a problem of your own, feel free to describe it in more detail.

Sorry, could you show me your corrected pipeline?

I’m only showing a snippet of it here (because I customized some passes), but I think it is enough to show the point: convert-linalg-to-loops appears twice. A sketch of how I invoke it as a single command follows after the list.

  -linalg-bufferize 
  -empty-tensor-to-alloc-tensor 
  -arith-bufferize 
  -tensor-bufferize 
  -func-bufferize   
  -convert-linalg-to-loops  
  -convert-vector-to-scf 
  -linalg-bufferize 
  -buffer-deallocation
  -convert-vector-to-scf 
  -convert-scf-to-cf 
  -convert-vector-to-llvm 
  -arith-expand 
  -convert-math-to-llvm 
  -convert-linalg-to-loops 
  -expand-strided-metadata 
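
Written out as one mlir-opt invocation (the file names are placeholders, and the passes I customized are still omitted), the fragment above looks roughly like this:

    mlir-opt input.mlir \
      -linalg-bufferize -empty-tensor-to-alloc-tensor -arith-bufferize \
      -tensor-bufferize -func-bufferize \
      -convert-linalg-to-loops -convert-vector-to-scf \
      -linalg-bufferize -buffer-deallocation \
      -convert-vector-to-scf -convert-scf-to-cf \
      -convert-vector-to-llvm -arith-expand -convert-math-to-llvm \
      -convert-linalg-to-loops -expand-strided-metadata \
      -o output.mlir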

Thanks for your reply, I will try your pipeline.

You are welcome. This pipeline is what I use to lower a TensorFlow model, so you may not need so many passes. I recommend trying them one by one and understanding what each one does; you can find all of the passes here.
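
One way to do that, assuming a reasonably recent mlir-opt (the file name is a placeholder), is to add one pass at a time and dump the IR after each pass:

    # --mlir-print-ir-after-all prints the module after every pass (to stderr),
    # which makes it easy to see which pass introduces or removes a given op
    # such as linalg.map.
    mlir-opt input.mlir -linalg-bufferize -tensor-bufferize \
      --mlir-print-ir-after-all 2> ir_dump.log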