Using mlir::DenseElementsAttr::getValues with f16

Hi Folks:

I am able to successfully create ops of different tensor types e.g. F32Tensor. But I run into a problem with BuiltinAttributes when trying to do so for f16.

One of the instructions in my dialect looks like this:

%half = "myDialect.constant"() {value = dense<[-1.0, -0.0, 0.0, 1.0]>
         : tensor<4xf16>} : () -> tensor<4xf16>

In my ODS definition the op looks like this:

def ConstantOp : MyDialect_Op<"constant", [NoSideEffect, ConstantLike]> {
  let results = (outs F16Tensor);
  let arguments = (ins ElementsAttr:$value);
}

Then in my backend I do something like this -

    const auto valueAttr = op->getAttrOfType<mlir::DenseElementsAttr>("value");
    const auto type = valueAttr.getType().getElementType();
    if (type.isF16()) {
      std::vector<uint16_t> values;
      for (const auto value : valueAttr.getValues<uint16_t>()) {
        // ...
      }
    }
The problem seems to be the call valueAttr.getValues<uint16_t>(). There is an assertion failure in isValidIntOrFloat, shown below. I tried both uint16_t and float, but I am not sure how this is intended to be used for the f16 type.

mlir/IR/BuiltinAttributes.h:844: llvm::iterator_range<mlir::DenseElementsAttr::ElementIterator<T>> mlir::DenseElementsAttr::getValues() const [with T = short unsigned int]: Assertion `isValidIntOrFloat(sizeof(T), std::numeric_limits<T>::is_integer, std::numeric_limits<T>::is_signed)' failed.

Thank you so much for any help on this.
Best Regards

You cannot use getValues<uint16_t>(), as it expects the element type to be i16. The simplest way to get the f16 binary representation is to get an APFloat value: cast valueAttr to DenseFPElementsAttr and call getFloatValues(). Once you have the APFloat, you can call bitcastToAPInt().getZExtValue() to get the binary representation.

The asserts are intended to provide a general guard against unexpected type conversions for the native types (i.e. some people may expect an implicit conversion as opposed to a raw reinterpret of the held data). You can, however, define your own placeholder data type to represent half that can be used directly with DenseElementsAttr. The only thing technically required is a specialization of std::numeric_limits that sets is_specialized (and not is_integer). This is what TensorFlow does, for example.

– River

Thanks Thomas.

I was able to make progress and compile successfully with your suggestion. The resulting values are, however, all 0 for some reason. This is what I do now:

const auto valueAttr = op->getAttrOfType<mlir::DenseElementsAttr>("value");
for (const llvm::APFloat v : valueAttr.getFloatValues()) {
  const float value = v.convertToFloat();
  std::cout << "value = " << value << std::endl;
}

With an op like this -

%half = "myDialect.constant"() {value = dense<[-1.1, 1.0, 0.1, 1.1]> : tensor<4xf16>} : () -> tensor<4xf16>

Above prints out -
value = 6.75846e-41
value = 2.15239e-41
value = 1.66446e-41
value = 2.16669e-41

Thanks again

I think your code is correct; I put it in a pass:

void runOnFunction() override {
  getFunction().walk([](Operation *op) {
    const auto valueAttr = op->getAttrOfType<mlir::DenseElementsAttr>("value");
    if (valueAttr) {
      for (const llvm::APFloat v : valueAttr.getFloatValues()) {
        const float value = v.convertToFloat();
        printf("value = %f\n", value);
      }
    }
  });
}
gives me:

value = -1.099609
value = 1.000000
value = 0.099976
value = 1.099609
module  {
  func @f16() -> tensor<4xf16> {
    %0 = "myDialect.constant"() {value = dense<[-1.099610e+00, 1.000000e+00, 9.997550e-02, 1.099610e+00]> : tensor<4xf16>} : () -> tensor<4xf16>
    return %0 : tensor<4xf16>
  }
}