Got an error complaining about dominance on IR that seems valid

With the patch ⚙ D124001 Support non identity layout map for reshape ops in MemRefToLLVM lowering applied to the MemRefToLLVM pass, which lowers memref.collapse_shape to the LLVM dialect, I got an error with this IR dump:

** Insert  : 'builtin.unrealized_conversion_cast'(0x5fa3270)
** Insert  : 'builtin.unrealized_conversion_cast'(0x5fa31c0)
../memref-to-llvm-local.mlir:17:8: error: operand #0 does not dominate this use
  %0 = tensor.collapse_shape %arg0 [[0], [1, 2], [3]]: tensor<2x?x?x?xf32> into tensor<2x?x?xf32>
       ^
../memref-to-llvm-local.mlir:17:8: note: see current operation: %30 = "llvm.insertvalue"(%28, %29) {position = [4, 2]} : (!llvm.struct<(ptr<f32>, ptr<f32>, i64, array<3 x i64>, array<3 x i64>)>, i64) -> !llvm.struct<(ptr<f32>, ptr<f32>, i64, array<3 x i64>, array<3 x i64>)>

The most relevant parts of the IR are:

  %30 = "llvm.insertvalue"(%28, %29) {position = [4, 2]} : (!llvm.struct<(ptr<f32>, ptr<f32>, i64, array<3 x i64>, array<3 x i64>)>, i64) -> !llvm.struct<(ptr<f32>, ptr<f32>, i64, array<3 x i64>, array<3 x i64>)>
  %31 = "memref.collapse_shape"(%arg0) {reassociation = [[0], [1, 2], [3]]} : (memref<2x?x?x?xf32>) -> memref<2x?x?xf32>

The def comes right before the use in the same basic block, back to back, so it seems to me the error is reported incorrectly. Am I missing something here?

I misunderstood the error message. The line %0 = tensor.collapse_shape %arg0 [[0], [1, 2], [3]]: tensor<2x?x?x?xf32> into tensor<2x?x?xf32> is just the source location attached to the failing op, not the use the verifier is complaining about. The note see current operation: %30 = ... is what refers to the use, not the defining op.
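
For anyone else who trips over this, here is a simplified C++ sketch (a hypothetical helper, not the upstream verifier code) of how MLIR composes this kind of diagnostic; it shows why the file:line:col points at the original tensor.collapse_shape line while the note shows the lowered use:

```cpp
#include "mlir/IR/Diagnostics.h"
#include "mlir/IR/Operation.h"
#include "mlir/Support/LogicalResult.h"

// Hypothetical helper that mimics the shape of the verifier's dominance
// diagnostic; this is a sketch, not the actual upstream implementation.
static mlir::LogicalResult reportDominanceFailure(mlir::Operation *useOp,
                                                  unsigned operandNo) {
  // The error is attached to the op that *uses* the operand, so the
  // file:line:col printed first comes from useOp->getLoc(). After lowering,
  // that location is typically inherited from the original source op -- here
  // the tensor.collapse_shape line -- not from wherever the lowered op sits.
  mlir::InFlightDiagnostic diag =
      useOp->emitError() << "operand #" << operandNo
                         << " does not dominate this use";
  // When op printing on diagnostics is enabled, a note of the form
  // "see current operation: ..." prints that same *using* op, not the
  // defining op.
  diag.attachNote(useOp->getLoc()) << "see current operation: " << *useOp;
  return mlir::failure();
}
```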

Also, even though %28 is printed in bb2, it is actually in bb0 because of how the insertion point was set up when %28 was inserted.
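
To illustrate that last point, here is a minimal sketch (a hypothetical helper, not code from the patch) of how a builder's insertion point decides which block a new op belongs to, independently of where its users end up:

```cpp
#include "mlir/Dialect/LLVMIR/LLVMDialect.h"
#include "mlir/IR/Builders.h"

// Hypothetical helper, not taken from D124001: it creates an
// llvm.mlir.constant at the start of `entryBlock`, no matter where the
// builder was pointing before the call.
static mlir::Value createI64ConstantInEntryBlock(mlir::OpBuilder &builder,
                                                 mlir::Block *entryBlock,
                                                 mlir::Location loc,
                                                 int64_t value) {
  mlir::OpBuilder::InsertionGuard guard(builder);
  // The new op is owned by entryBlock (bb0) even though its users -- and the
  // neighbouring lines of an IR dump taken mid-conversion -- may live in a
  // later block such as bb2.
  builder.setInsertionPointToStart(entryBlock);
  return builder.create<mlir::LLVM::ConstantOp>(
      loc, builder.getI64Type(), builder.getI64IntegerAttr(value));
}
```

So when reading a dump produced in the middle of conversion, the textual position of a value is not a reliable indicator of which block actually owns it.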