Is there a way to manually control optimizations, like the flags we have in clang?

I am trying to explore the various optimizations available in XLA and MLIR. Is there a way I can use flags like -O in clang and other compilers, or, say, LLVM's pass manager, to control them? In the XLA documentation I could find a few flags, but they don't provide a means to play with the optimizations individually.
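For context, the flags I found are coarse-grained debugging options passed through the XLA_FLAGS environment variable, along the lines of the sketch below (the xla_dump_to option and the train.py script are just illustrative; I haven't found anything that toggles individual passes):

```
# Dump the HLO that XLA compiles, for inspection after the fact.
# This controls what gets dumped, not which optimization passes run.
XLA_FLAGS="--xla_dump_to=/tmp/xla_dump" python train.py
```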

I would highly appreciate any leads.

MLIR has mlir-opt, which acts similarly to LLVM's opt, i.e. it lets you construct a pass pipeline manually and dump the result. Various MLIR users have their own version of the tool; for example, TensorFlow has tf-mlir-opt. Standalone XLA is not (yet fully) MLIR-based, so it may be worth asking on the XLA communication channels.
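As a minimal sketch (assuming an input file named input.mlir and the upstream canonicalize and cse passes; exact flag spelling can vary between MLIR versions):

```
# Run an explicit, hand-picked pass pipeline and write the result to a file.
mlir-opt input.mlir -canonicalize -cse -o output.mlir

# Dump the IR after every pass to see what each one changed.
mlir-opt input.mlir -canonicalize -cse --mlir-print-ir-after-all

# The same pipeline written with the textual pipeline syntax.
mlir-opt input.mlir --pass-pipeline='builtin.module(canonicalize,cse)'
```

tf-mlir-opt works the same way, with the TensorFlow-specific passes registered in addition.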

@ftynse Thanks for the info. I will reach out to the XLA community.

@ftynse, I landed on XLA-Dev from the TensorFlow Mailing Lists page. It doesn't seem to be very active. Would it be possible for you to share the appropriate XLA communication channels?

@herhut or @pifon2a may know

The question was answered on the TensorFlow mailing list 2 days ago.

It isn't very active, but that is the correct channel, and there are folks monitoring it.

Yes, I got a reply there. Since it appeared inactive from the activity log, I thought I was in the wrong place.