[RFC] Pros and cons of treating omp::TargetOp similarly to an mlir::FuncOp with an mlir::FunctionOpInterface

This isn't quite an RFC, as it isn't a change I am sure about without further community insight and input; it is more of a "Request for Information/Input" (RFI).

So, at the moment in the OpenMP dialect we have an operation called omp::TargetOp, which is eventually lowered either to a device kernel for execution on a target device, or to a host fallback kernel plus some device-kernel launch code on the host.

The omp::TargetOp is marked IsolatedFromAbove, has a single region, and can contain other MLIR operations. In many ways it is quite similar to a function/mlir::FuncOp; however, it is also quite distinct because of how it is lowered.
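To make the comparison concrete, here is a minimal sketch (a hypothetical debugging helper, not upstream code; the FunctionInterfaces header path has moved between LLVM versions) of how the two ops compare when queried through MLIR's generic trait/interface machinery:

```cpp
#include "mlir/IR/OpDefinition.h"
#include "mlir/Interfaces/FunctionInterfaces.h"
#include "llvm/Support/raw_ostream.h"

// Hypothetical helper: both func.func and omp.target carry the
// IsolatedFromAbove trait and hold a region of other ops, but today only
// func.func implements FunctionOpInterface.
static void describeOp(mlir::Operation *op) {
  bool isolated = op->hasTrait<mlir::OpTrait::IsIsolatedFromAbove>();
  bool functionLike = llvm::isa<mlir::FunctionOpInterface>(op);
  llvm::errs() << op->getName() << ": isolated-from-above=" << isolated
               << ", function-like=" << functionLike << "\n";
}
```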

Due to a recent issue where an omp::TargetOp fell into a segment of code utilising the mlir::FunctionOpInterface and failed, because it doesn't currently support this interface (and cannot, I believe, unless some adjustments are made), I was curious about:

  1. Whether implementing this interface for TargetOp seems like a reasonable thing to do
  2. Any possible side effects or issues of doing this that people may be aware of
  3. The possible pros and cons of doing so
  4. Any other problems or issues people can think of

Unfortunately, I am not the most savvy or knowledgeable about all the segments of MLIR and Flang’s MLIR infrastructure, hence the community question!

For the most part, I imagine that adopting the FunctionOpInterface would opt TargetOp into a number of MLIR optimisation passes (current and future), which in certain cases may not be desirable and could lead to some difficult-to-track-down issues. On the other hand, we may get a lot out of the passes we opt into this way, and it would be easier for those passes to handle TargetOp without requiring special treatment for it.
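As a rough illustration of what that opting-in means in practice, here is a minimal sketch (hypothetical helper, not an actual upstream pass) of code written against FunctionOpInterface; any op that implements the interface gets visited automatically, with no knowledge of the concrete op required:

```cpp
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Interfaces/FunctionInterfaces.h"

// Hypothetical stand-in for a pass written against FunctionOpInterface:
// every op implementing the interface is visited by the walk below.
static void visitFunctionLikeOps(mlir::ModuleOp module) {
  module.walk([](mlir::FunctionOpInterface func) {
    if (func.getFunctionBody().empty())
      return; // skip declarations without a body
    // Code like this typically treats the entry-block arguments as the
    // function's dummy arguments, which is exactly the assumption that can
    // surprise it on a not-quite-a-function op.
    for (mlir::BlockArgument arg : func.getFunctionBody().getArguments())
      (void)arg; // analyse or transform `arg` here
  });
}
```

So giving omp::TargetOp the interface would opt it into walks like this one with no further changes to the pass itself, for better or worse.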

I guess what would be useful is to make sure that MLIR (optimization) passes are aware enough of such things that a missing interface is detected properly and does not simply ICE arbitrarily. I'm thinking of the equivalent of a visitor pattern, where the compiler will object if you do not implement visiting operations for all sub-types of the visited structure. Or, you know, like a switch-case where the compiler moans about you not having a case statement for every enumeration value.
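As a sketch of what "detect it properly" could look like (hypothetical helper and diagnostic text, not a proposal for a specific API), a pass can verify the interface up front and emit a readable error rather than crashing somewhere deeper:

```cpp
#include "mlir/IR/Operation.h"
#include "mlir/Interfaces/FunctionInterfaces.h"

// Hypothetical up-front check: report a missing FunctionOpInterface with a
// proper diagnostic instead of letting the pass ICE later on.
static bool checkEnclosedInFunctionLikeOp(mlir::Operation *op) {
  mlir::Operation *parent = op->getParentOp();
  if (parent && llvm::isa<mlir::FunctionOpInterface>(parent))
    return true;
  op->emitOpError(
      "expected to be nested inside an op implementing FunctionOpInterface");
  return false;
}
```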

Just a small bump to give this discussion a little bit of final visibility, in case anyone has any further input for either myself or @mjklemm!

Could you describe this case? In particular, why did it go into that case? Is this in upstream MLIR/flang code?

I would imagine adding the interface is for opting into passes (like inlining) that work on the FunctionOpInterface.

It was originally in upstream MLIR/Flang, but has since been fixed by a PR from @tblah (thank you very much again). The issue occurred in Flang's AddAliasTags pass, where a check was missing to verify that the parent of an entry block actually implements the FunctionOpInterface. This meant that, when run on a TargetOp, the pass would continue onwards until it tried to use the FunctionOpInterface to access the information required to generate alias information; as TargetOp doesn't implement it, this ICE'd the compiler (@tblah may be able to describe the problem and fix better). My alternative fix attempted to provide the required information from TargetOp directly so that alias information could still be created.

The related PRs that contain some discussion on the issue:

The issue isn't directly related to implementing the FunctionOpInterface (although that is an option that would have fixed the issue and avoided it altogether; whether or not the optimisation makes sense for TargetOp is another question). But it raised the question of what the potential downsides and upsides of implementing the FunctionOpInterface for omp::TargetOp would be!

Yes, that would be the idea: to treat omp::TargetOp as more "function-like" while retaining the option to opt out of passes as necessary.


Nice explanation. Another way to put it: previously the AddAliasTags pass mistook block arguments to the entry block of a TargetOp for function dummy arguments, and eventually fell over while trying to generate alias tags describing them as such.

The fix was to ensure that we always check that the parent operation implements FunctionOpInterface.
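For anyone following along, a rough sketch of the shape of that check (hypothetical function, not the actual upstream patch):

```cpp
#include "mlir/IR/Block.h"
#include "mlir/Interfaces/FunctionInterfaces.h"

// Hypothetical sketch: only describe entry-block arguments as function dummy
// arguments once the parent op is confirmed to implement FunctionOpInterface;
// otherwise (e.g. an omp.target region today) skip the block entirely.
static void processEntryBlock(mlir::Block &entryBlock) {
  mlir::Operation *parent = entryBlock.getParentOp();
  if (!parent)
    return;
  auto func = llvm::dyn_cast<mlir::FunctionOpInterface>(parent);
  if (!func)
    return; // not function-like: skip rather than crash
  for (mlir::BlockArgument arg : entryBlock.getArguments()) {
    // Safe here to treat `arg` as a dummy argument of `func`.
    (void)arg;
  }
}
```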