MLIR for TFLite Micro (ARM Cortex-M)


I’d like to understand whether there is any support or information regarding MLIR support for TensorFlow Lite for Microcontrollers targeting ARM MCUs with CPUs such as the Cortex-M4, Cortex-M7, etc.?

Thank you.


Which parts are you interested in? E.g., codegen of a TFLite Micro kernel, or translation to/from TFLite Micro?



Ideally all :slight_smile: but mainly the translation into TensorFlow Lite Micro.

TFLite Micro does use the TFLite format, the TF-to-TFLite converter is written using MLIR, and there is a TFLite dialect. But there is nothing beyond that (e.g., we don’t have any special behavior for TFLite Micro). I think it would be quite interesting. At the moment, I could see that one would need to do kernel generation separately; the CPU codegen work could be used there, but additional work would be needed to generate wrappers that conform to the expected interface. And for export from TFLite they do support flex ops, so there might even be a flow possible whereby the generated kernel & wrapper get dropped in such that TFLite Micro picks them up :slight_smile: But that is more speculative.
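As a sketch of the converter path mentioned above: the TF-to-TFLite conversion is exposed through `tf.lite.TFLiteConverter` (the MLIR-based converter is the default in TF 2.x), and the resulting flatbuffer is the same format TFLite Micro loads. The tiny Keras model below is just a stand-in, not any specific microcontroller workload:

```python
import tensorflow as tf

# Tiny stand-in model; a real Cortex-M workload would go here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# The TF -> TFLite converter runs MLIR passes over the TF and TFLite dialects.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optional: default optimizations (e.g. quantization), typical for MCU targets.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Serialize the flatbuffer; this is the file TFLite Micro consumes.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# TFLite flatbuffers carry the file identifier "TFL3" at bytes 4..8.
print(tflite_model[4:8])
```

From there, deployment to a Cortex-M board is handled by TFLite Micro itself (e.g. embedding the flatbuffer as a C array), outside of MLIR.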

Would you be able to say more about your use case? I’d gladly pull in Pete from micro side.

– Jacques

I’m looking for a way to effectively run TensorFlow models on various flavors of ARM Cortex-M microcontrollers, and to find the most optimal way to do it, such as TFLite Micro + CMSIS-NN, XLA compilation, or XLA AoT (Ahead-of-Time) compilation, while targeting Cortex-M CPUs (CM4, CM7, etc.) where NN models are optimized at various levels for execution (runtime or AoT).
The target microprocessors may not just be single-core but multicore CPUs.
Can MLIR somehow be of assistance in this?