Lowering of tf.Conv2D to other mlir dialects?

Hello, this is my first time posting here. Let me know if this question is better suited to a specific TensorFlow discussion forum.

I have been trying to translate TensorFlow models into LLVM IR (with as few external calls as possible). Chaining
tf-mlir-translate, tf-opt, mlir-hlo-opt and ${LLVM_BIN}/mlir-translate on very simple TF models worked and produced the expected results.
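
For reference, the downstream half of that pipeline looked roughly like the sketch below. The exact mlir-hlo-opt/mlir-opt pass names, the bufferization steps, and the file names (simple-mhlo.mlir, simple-bufferized.mlir, etc.) are placeholders that depend on your mlir-hlo and LLVM revisions, so treat this as an approximation rather than the exact invocation:

# (sketch) legalize mhlo to linalg-on-tensors
mlir-hlo-opt --hlo-legalize-to-linalg \
    ${PWD}/simple-mhlo.mlir \
    -o ${PWD}/simple-linalg.mlir

# (sketch) after bufferization (passes omitted here), lower linalg -> loops -> llvm dialect
${LLVM_BIN}/mlir-opt \
    --convert-linalg-to-loops \
    --convert-scf-to-std \
    --convert-std-to-llvm \
    ${PWD}/simple-bufferized.mlir \
    -o ${PWD}/simple-llvm.mlir

# Emit LLVM IR from the llvm dialect
${LLVM_BIN}/mlir-translate --mlir-to-llvmir \
    ${PWD}/simple-llvm.mlir \
    -o ${PWD}/simple.ll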

However, if I try to do the same with a more complex model such as MobileNetV1 with the steps below:

# Successfully translates the model to mlir's tf dialect
bazel run //tensorflow/compiler/mlir:tf-mlir-translate -- \
    --graphdef-to-mlir \
    --tf-input-arrays=input \
    --tf-input-data-types=DT_FLOAT \
    --tf-input-shapes=1,160,160,3 \
    --tf-output-arrays="MobilenetV1/Predictions/Reshape_1" \
    ${PWD}/mobilenet_v1_0.5_160_frozen.pb \
    -o ${PWD}/mobilenet.mlir


# Fails due to missing MLIR lowerings for several `tf` operations
bazel run //tensorflow/compiler/mlir:tf-opt -- \
    --tf-executor-to-functional-conversion \
    --tf-region-control-flow-to-functional \
    --xla-legalize-tf \
    ${PWD}/mobilenet.mlir \
    -o ${PWD}/mobilenet-mhlo.mlir

I get the following error message:

/working_dir/tmp/mobilenet/mobilenet.mlir:4:3: error: The following operations cannot be legalized: tf.AvgPool (count: 1); tf.Conv2D (count: 15); tf.DepthwiseConv2dNative (count: 13); tf.Relu6 (count: 27); tf.Reshape (count: 2); tf.Shape (count: 1); tf.Softmax (count: 1); tf.Squeeze (count: 1). These legalization failure(s) may be due to missing TF to HLO lowerings and/or unsupported attributes, etc.
  func @main(%arg0: tensor<1x160x160x3xf32>) -> tensor<*xf32> attributes {tf.entry_function = {control_outputs = "", inputs = "input", outputs = "MobilenetV1/Predictions/Reshape_1"}} {
  ^
/working_dir/tmp/mobilenet/mobilenet.mlir:4:3: error: Emitting more detail about one op that failed to legalize...
  func @main(%arg0: tensor<1x160x160x3xf32>) -> tensor<*xf32> attributes {tf.entry_function = {control_outputs = "", inputs = "input", outputs = "MobilenetV1/Predictions/Reshape_1"}} {
  ^
/working_dir/tmp/mobilenet/mobilenet.mlir:362:61: error: 'tf.AvgPool' op is not legalizable
      %outputs_710, %control_711 = tf_executor.island wraps "tf.AvgPool"(%outputs_708) {data_format = "NHWC", device = "", ksize = [1, 5, 5, 1], padding = "VALID", strides = [1, 2, 2, 1]} : (tensor<*xf32>) -> tensor<*xf32>
                                                            ^
/working_dir/tmp/mobilenet/mobilenet.mlir:362:61: note: see current operation: %357 = "tf.AvgPool"(%356) {data_format = "NHWC", device = "", ksize = [1, 5, 5, 1], padding = "VALID", strides = [1, 2, 2, 1]} : (tensor<*xf32>) -> tensor<*xf32>

The operations flagged as illegal are the following:
tf.AvgPool, tf.Conv2D, tf.DepthwiseConv2dNative, tf.Relu6, tf.Reshape, tf.Shape, tf.Softmax, tf.Squeeze

As of right now, is there a way to lower these tf operations into mhlo or another lower-level dialect? Ideally, I would like to avoid calls to external libraries in the final llvm dialect.

Platform info:
Working off the TensorFlow [master](https://github.com/tensorflow/tensorflow) branch, commit: e08252041dc6a1ff96ce8272ebf70114798fa86b
Using docker image: tensorflow/tensorflow:nightly-custom-op-ubuntu16

I saw the same issue when I tried to lower a ResNet tf-dialect MLIR module to mhlo with the -xla-legalize-tf pass.

@agostini01 the issues are gone with the commands below.

tf-mlir-translate -- \
    --graphdef-to-mlir \
    --tf-enable-shape-inference-on-import=true \
    --tf-input-arrays=input \
    --tf-input-shapes=1,128,128,3 \
    ${PWD}/mobilenet_v1_0.25_128_frozen.pb \
    -o mobilenet_v1_tf.mlir

tf-opt -- \
    --tf-executor-island-coarsening \
    --canonicalize \
    --xla-legalize-tf \
    mobilenet_v1_tf.mlir \
    -o mobilenet_v1_hlo.mlir

@yongwww The issues are gone with the commands you gave, but the resulting mhlo doesn't have any ops?

I would recommend using one of the registered pipelines instead; tf-to-hlo-pipeline should give a better result than xla-legalize-tf.
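
For instance, the earlier tf-opt invocation would become something like the untested sketch below. I am assuming the pipeline is registered under that name and exposed by your tf-opt build; check tf-opt --help to confirm the exact spelling:

# (sketch) run the registered pipeline instead of the individual passes
bazel run //tensorflow/compiler/mlir:tf-opt -- \
    --tf-to-hlo-pipeline \
    ${PWD}/mobilenet.mlir \
    -o ${PWD}/mobilenet-mhlo.mlir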
