I’m interested in adding a bitcast op to the core dialects. There’s a bitcast in the vector dialect, but it’s limited to vectors and can cast between element types of different bit widths (Original RFC). I’m thinking of something more like the scalar/elementwise ops: more limited (with more invariants) and with no special vector magic (that additional functionality belongs in the vector dialect IMO).
This bitcast op would take an int-or-float-like operand and cast it to an int-or-float-like type of the same bit width. For shaped types, the container type must be identical except for the element type, and bitcast would operate elementwise. So these would be legal:
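For instance (a sketch; the syntax is assumed to mirror the existing std cast ops like fpext):

```mlir
// Legal: scalar f32 <-> i32, both 32 bits wide.
%0 = bitcast %a : f32 to i32
// Legal: elementwise over shaped types with identical containers.
%1 = bitcast %v : vector<4xi32> to vector<4xf32>
%2 = bitcast %t : tensor<2x16xf16> to tensor<2x16xi16>
// Illegal: bit widths differ (f32 is 32 bits, i64 is 64).
// %3 = bitcast %a : f32 to i64
// Illegal: the container shape changes.
// %4 = bitcast %v : vector<4xi32> to vector<2xi64>
```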
Where to put this: I think it belongs next to std.fpext and the like, so it should go in standard for now and move along with those ops if we come up with a new place for them.
My specific motivation: lowering mhlo.bitcast to linalg
I’d also love to see this, as I was just looking for it today! There are a bunch of useful conversion ops (sitofp, etc.) in std, and this feels like it can easily sit right beside them for now without invoking the whole “std shouldn’t exist” discussion.
Seems reasonable. This is basically a limited version of LLVM’s bitcast instruction. Unless someone has a specific objection to the design of LLVM’s bitcast instruction (some poor decision that we don’t want to carry over and which falls into the subset of operations that @gcmn has scoped this down to), I think we should add this.
This seemed pretty uncontroversial, so I went ahead and sent D105376 ([MLIR][std] Introduce bitcast operation) for review. If anyone has ideas for constants other than 0 that would be easy to test when bitcasting between float and integer representations, that would be appreciated.
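For concreteness, the kind of fold test I have in mind (a hypothetical example; 1.0f has the easy-to-verify single-precision pattern 0x3F800000):

```mlir
// A few IEEE 754 single-precision patterns that are easy to check by hand:
//   1.0f = 0x3F800000 = 1065353216
//  -0.0f = 0x80000000 (sign bit only)
//   2.0f = 0x40000000 = 1073741824
func @fold_bitcast() -> i32 {
  %c = constant 1.0 : f32
  // Expected to fold to the constant 1065353216 (0x3F800000).
  %0 = bitcast %c : f32 to i32
  return %0 : i32
}
```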
One thing I noticed is that none of the arithmetic cast ops seem to operate on tensors; they only mark vectors of the same shape as cast-compatible. This looks to me like an easy and obvious extension for these elementwise ops, so I may follow up with that (though I don’t have a particular need for it at the moment).
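Concretely, per the current behavior (a sketch):

```mlir
// Accepted today: same-shape vectors are cast-compatible.
%0 = fptosi %v : vector<4xf32> to vector<4xi32>
// Rejected today, though the elementwise extension seems natural:
// %1 = fptosi %t : tensor<4xf32> to tensor<4xi32>
```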
One complication with standard ops on tensors is that one then also needs to define a way to bufferize them, or to transform them into a dialect that has an existing lowering path to code generation, e.g. linalg. But then the question is what value these operations provide in std if they have no lowering anyway.
bitcast is special because it has a trivial lowering to memref that does not require allocation.
I think this would just follow the typical elementwise-to-linalg lowering path, bottoming out on LLVM’s bitcast (allowing fusion etc., perhaps with the bitcast happening in registers). Not all cases are necessarily profitable to handle via a memref reinterpretation, as that implies having the value-to-bitcast materialized in memory.
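For illustration, a sketch of what the elementwise-to-linalg conversion might produce for a tensor bitcast (shapes and function names here are mine, not from the patch):

```mlir
#map = affine_map<(d0) -> (d0)>
func @tensor_bitcast(%t: tensor<8xf32>) -> tensor<8xi32> {
  %init = linalg.init_tensor [8] : tensor<8xi32>
  // The scalar bitcast in the body lowers to LLVM's bitcast and can fuse
  // with neighboring elementwise ops, keeping values in registers.
  %r = linalg.generic {indexing_maps = [#map, #map],
                       iterator_types = ["parallel"]}
      ins(%t : tensor<8xf32>) outs(%init : tensor<8xi32>) {
  ^bb0(%in: f32, %out: i32):
    %b = bitcast %in : f32 to i32
    linalg.yield %b : i32
  } -> tensor<8xi32>
  return %r : tensor<8xi32>
}
```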
I guess I am not convinced of the value of going via standard to linalg. I am likely biased, as I have another tensor-level dialect to work with (hlo) and hence have no need for it. In my mind, standard (and math and its friends) are for manipulating scalars and vectors.
Just confirming that this conversation is tangential to the main RFC? (Not trying to shut down conversation, but clarifying whether there is actually a disagreement here.) The patch implements this via the ArithmeticCastOp template, which gives it the ElementwiseMappable trait like its peers. I believe the “typical” path that Sean refers to relates to various bits of frontend code that use this trait to make their lives easier, requiring fewer special-cased conversions. I believe this particular way of looking at it was discussed when that trait was added, but it isn’t really linalg-specific; it’s just a way to use the trait.
I’m just replying to your characterization of this new op as a “complication”, which I disagreed with because it slots in perfectly with the already-existing path from std elementwise ops on tensors to linalg (unused by anyone AFAIK, and it could be deleted, but it does work well). We do have a way to bufferize all of them, namely convert-elementwise-to-linalg followed by linalg-bufferize.
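For concreteness, a hand-written sketch of the buffer form that pipeline ends up with (the actual pass output may differ in details):

```mlir
// After convert-elementwise-to-linalg + linalg-bufferize, the generic
// operates on memrefs, with the result buffer allocated by bufferization.
func @bufferized(%in: memref<8xf32>) -> memref<8xi32> {
  %out = memref.alloc() : memref<8xi32>
  linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>,
                                   affine_map<(d0) -> (d0)>],
                  iterator_types = ["parallel"]}
      ins(%in : memref<8xf32>) outs(%out : memref<8xi32>) {
  ^bb0(%i: f32, %o: i32):
    %b = bitcast %i : f32 to i32
    linalg.yield %b : i32
  }
  return %out : memref<8xi32>
}
```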
Whether we want to keep that path (and std elementwise on tensors altogether) is another question. I tend to agree with you on reducing std/math to scalars and vectors only; I have found that going directly to linalg-on-tensors is perfectly adequate, and I never materialize the std-elementwise-ops-on-tensors form.
Yes. I did not intend to block this and was under the assumption that it is going ahead. I was only commenting on the point quoted above, and wanted to raise that elementwise support for tensors in std adds complexity that is currently unused (and not needed by @gcmn, @_sean_silva, or anyone else I am aware of).
It’s a good point. I think that if it is just a neat thing we can do, but we don’t foresee a concrete use given where the project is now, it is better to remove such things than to leave them in the tree.
The reason I find it weird is that the arithmetic ops operate on tensors and the casts are declared as tensorizable. I’d be fine with standard not operating on tensors at all, but it’s odd that these casts don’t operate on tensors when other parts of standard do.
I agree, and I’d vote for standard not operating on tensors at all. So instead of making it consistent in the “wrong” direction, we can leave it inconsistent or, even better, remove tensor support altogether.