Hi!
While reading the TOSA canonicalization passes, I found this one:
```cpp
OpFoldResult tosa::ExpOp::fold(FoldAdaptor adaptor) {
  auto input = getInput1();
  // Element-wise exp(log(x)) = x
  if (auto op = input.getDefiningOp<tosa::LogOp>()) {
    return op.getInput1();
  }
  return {};
}
```
However, if x is too large, then exp(x) would overflow, and the result might not be x after applying log. Am I misunderstanding something here? Why is this optimization valid?
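For concreteness, here is a minimal standalone check (plain C++ with IEEE-754 float semantics, independent of TOSA/MLIR) illustrating what I mean:

```cpp
#include <cmath>
#include <cstdio>

int main() {
  float x = 100.0f;
  // exp(100) is about 2.7e43, far above FLT_MAX (~3.4e38), so
  // std::exp overflows to +inf, and log(+inf) is +inf -- not 100.
  float y = std::log(std::exp(x));
  std::printf("x = %g, log(exp(x)) = %g\n", x, y); // prints inf, not 100
  return 0;
}
```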
Thanks for the help!
All the best,
Yuyou
Hi @Hatsunespica,
I suspect you’re correct here and that this is a bug; I think we should remove these canonicalizations. Possibly we could re-introduce these optimizations under a “fast-math” sort of transformation if they are useful.
This seems like a typical use of a peephole transformation, specifically to avoid such overflow cases as the OP mentioned. I’d like to know why this is wrong in a bit more detail.
Potentially I’m not following the concern here, apologies.
My understanding of the OP was that the canonicalization log(exp(x)) → x would alter the numerical semantics of the graph under TOSA’s floating-point behaviour rules: if we don’t run canonicalization, the result is inf, but if canonicalization is run, the result is x.
This seems undesirable to me, particularly for a canonicalization, which I believe should be semantics-preserving; it might be better suited to a separate, opt-in transformation with relaxed FP expectations.
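To sketch what I mean (a hypothetical sketch only; `FoldLogExp` and `populateTosaFastMathPatterns` are illustrative names, not existing TOSA API), the fold could live in a rewrite pattern that a pipeline adds only when the user opts into relaxed FP behaviour:

```cpp
#include "mlir/Dialect/Tosa/IR/TosaOps.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

namespace {
// log(exp(x)) -> x is only valid if we may assume exp(x) never
// overflows, so it is kept out of the default canonicalization set.
struct FoldLogExp : public OpRewritePattern<tosa::LogOp> {
  using OpRewritePattern::OpRewritePattern;
  LogicalResult matchAndRewrite(tosa::LogOp op,
                                PatternRewriter &rewriter) const override {
    if (auto expOp = op.getInput1().getDefiningOp<tosa::ExpOp>()) {
      rewriter.replaceOp(op, expOp.getInput1());
      return success();
    }
    return failure();
  }
};
} // namespace

// A pipeline would populate this set only under an opt-in flag.
void populateTosaFastMathPatterns(RewritePatternSet &patterns) {
  patterns.add<FoldLogExp>(patterns.getContext());
}
```

That way the default canonicalizer stays semantics-preserving, while the optimization remains available to users who explicitly accept fast-math behaviour.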