TL;DR: do we prefer x.dyn_cast<T>() or dyn_cast<T>(x) going forward?
Hi folks,
In a couple of my recent MLIR code reviews I was asked to switch from (1) x.dyn_cast<T>() to (2) dyn_cast<T>(x), where x is some MLIR handle type like Value/Type/Attribute/etc.
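For concreteness, the two spellings side by side on a Type; this is just an illustrative sketch (the helper and variable names are made up):

```cpp
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Value.h"

// Hypothetical helper showing both casting styles on the same handle.
bool hasIntegerType(mlir::Value v) {
  mlir::Type type = v.getType();
  auto t1 = type.dyn_cast<mlir::IntegerType>();       // (1) member-function style
  auto t2 = llvm::dyn_cast<mlir::IntegerType>(type);  // (2) free-function style
  return t1 && t2;
}
```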
From what I gathered, style (1) was historically used by MLIR because of deficiencies in the LLVM casting infra (which have since been addressed: Updates to Casting.h), while LLVM has to use (2) because it uses pointers as handles to IR constructs.
Personally, I prefer (1) because it reads left-to-right (i.e., has a nice flow / continuation to it), but I can see the benefits of using (2) to keep things uniform in the whole llvm-project monorepo.
Should we prefer one over the other in new/modified MLIR code? Are there some ongoing efforts to migrate MLIR to (2)?
Not everything is/can be represented as a value-type “handle”, so (2) is more consistent. As an example, when a variable switches between Op and Operation*, (2) doesn’t change but (1) does.
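A minimal sketch of that consistency argument; the op type and variable names are only illustrative, not taken from real code:

```cpp
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/IR/Operation.h"
#include "mlir/IR/Value.h"

void inspect(mlir::Operation *opPtr, mlir::Value value) {
  // Style (2) reads the same for a raw Operation* and for a value-type handle:
  if (auto funcOp = llvm::dyn_cast<mlir::func::FuncOp>(opPtr)) {
    // ...
  }
  if (auto blockArg = llvm::dyn_cast<mlir::BlockArgument>(value)) {
    // ...
  }

  // Style (1) only exists on the value-type handles; there is no member
  // dyn_cast on Operation*, so code using it has to change whenever a
  // variable moves between an Op and an Operation*.
  if (auto blockArg = value.dyn_cast<mlir::BlockArgument>()) {
    // ...
  }
}
```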
Do you have a concrete timeline in mind for this? I imagine that, like other refactorings, we’d first migrate all existing uses to the newly preferred style (2), mark the old style (1) as deprecated, and then delete (1) after N weeks?
Don’t have an exact timeline, but it would effectively follow what you suggest. I haven’t had time recently to push on it, but may find time soon. Happy to review patches, though, if others have the desire to pick it up.
I’ve started changing downstream code to use free functions and discovered an interesting quirk:
Prior to C++20, function templates don’t participate in ADL when the template arguments are explicitly specified, unless ordinary lookup at the call site already finds a function template with the same name.
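For anyone hitting the same thing, here is a minimal, MLIR-free sketch of the quirk (the namespace, type, and function names are made up):

```cpp
namespace lib {
struct Handle {};

// A free-function cast template, analogous in shape to llvm::dyn_cast.
template <typename T>
T dyn_cast(Handle) { return T{}; }
} // namespace lib

int main() {
  lib::Handle h;

  // Pre-C++20, the next line does not compile: no function template named
  // 'dyn_cast' is visible here, so 'dyn_cast<int>' is parsed as a comparison
  // rather than a template-id, and ADL never gets a chance to find lib::dyn_cast.
  //   auto x = dyn_cast<int>(h);

  // Workarounds: qualify the call, or make *some* function template with that
  // name visible so the '<' is treated as a template argument list.
  auto a = lib::dyn_cast<int>(h);
  using lib::dyn_cast;
  auto b = dyn_cast<int>(h);  // OK now: ordinary lookup finds a template, ADL applies
  return a + b;
}
```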