I am trying to explore the various optimizations available in XLA and MLIR. Is there a way to use flags like `-O` in clang and other compilers, or something like LLVM's pass manager (`opt`)? In the XLA documentation I could find a few flags, but they don't provide a means to play with the optimizations individually.
MLIR has `mlir-opt`, which acts similarly to LLVM's `opt`, i.e. it lets you construct a pass pipeline manually and dump the result. Various MLIR users often have their own version of the tool; for example, there is `tf-mlir-opt` in TensorFlow. Standalone XLA is not (yet fully) MLIR-based, so it may be worth asking on the XLA communication channels.
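As a rough sketch of what this looks like in practice (the input file name is hypothetical, and the textual pipeline syntax below assumes a reasonably recent MLIR build; older releases accepted bare `-canonicalize -cse` flags instead):

```shell
# Run an explicit pass pipeline on an MLIR file and dump the result.
# 'canonicalize' and 'cse' are standard upstream MLIR passes;
# input.mlir is a placeholder for your own IR file.
mlir-opt input.mlir \
  --pass-pipeline='builtin.module(canonicalize,cse)' \
  --mlir-print-ir-after-all
```

`--mlir-print-ir-after-all` prints the IR after each pass runs, which is handy for seeing what each optimization actually did.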
@ftynse, I landed on XLA-dev from the Mailing lists | TensorFlow page. It seems that it's not very active. Would it be possible for you to share the appropriate XLA communication channels?