Question about selected_{int,real}_kind

Hi all,

We at BSC are internally testing flang (fir-dev for now) on some codebases of interest to us, and one of the things we identified is that there is currently no runtime lowering for selected_{int,real}_kind.

Now, I could go and implement something like this in runtime/transformational.cpp:

std::int32_t RTNAME(SelectedIntKind)(std::int32_t precision) {
  // Smallest integer kind whose decimal range covers the requested
  // number of digits; -1 if no kind is wide enough.
  if (precision <= 2)
    return 1;
  if (precision <= 4)
    return 2;
  if (precision <= 9)
    return 4;
  if (precision <= 18)
    return 8;
  if (precision <= 38)
    return 16;
  return -1;
}
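
For the real case, something along these lines might do (signature simplified; the decimal precision/range table assumes the IEEE kinds flang models, and the error codes are my reading of the F2018 rules for SELECTED_REAL_KIND):

std::int32_t RTNAME(SelectedRealKind)(std::int32_t p, std::int32_t r) {
  struct Entry {
    std::int32_t kind, precision, range;
  };
  // Decimal precision/range per kind: 2 = IEEE half, 3 = bfloat16,
  // 4 = single, 8 = double, 10 = x87 80-bit, 16 = IEEE quad.
  static constexpr Entry table[]{{2, 3, 4}, {3, 2, 38}, {4, 6, 37},
      {8, 15, 307}, {10, 18, 4931}, {16, 33, 4931}};
  bool havePrecision{false}, haveRange{false};
  for (const Entry &e : table) {
    havePrecision |= e.precision >= p;
    haveRange |= e.range >= r;
    if (e.precision >= p && e.range >= r)
      return e.kind; // first kind satisfying both requests
  }
  if (havePrecision && haveRange)
    return -4; // both satisfiable, but by no single kind
  return (havePrecision ? 0 : -1) + (haveRange ? 0 : -2);
}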

But I wonder if we can do better, so as to avoid future mismatches between the frontend capabilities and the runtime ones.

I found that there are SelectedIntKind and SelectedRealKind in lib/Evaluate/type.cpp, but it seems inappropriate to use things from libFortranEvaluate in libFortranRuntime.

I wonder if it makes sense to move some of those common concepts shared between the compile-time evaluation and the runtime evaluation into another library, so that libFortranEvaluate and libFortranRuntime can both use it (say, libFortranTranslation, for lack of a better name).
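
For illustration, the shared piece could be as small as a freestanding, header-only constexpr function (the library and header names here are made up):

// FortranCommon/selected-kind.h -- hypothetical shared header; it must avoid
// LLVM and C++ library dependencies so that the runtime can include it too.
namespace Fortran::common {
// One definition used by both compile-time folding (Evaluate) and the
// runtime, so the two can never disagree.
inline constexpr int SelectedIntKind(int precision) {
  if (precision <= 2)
    return 1;
  if (precision <= 4)
    return 2;
  if (precision <= 9)
    return 4;
  if (precision <= 18)
    return 8;
  if (precision <= 38)
    return 16;
  return -1;
}
} // namespace Fortran::common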

Perhaps the risk of the frontend and the compiler runtime drifting apart is small and there is no need for such a common library.

Thoughts?

Hi @rofirrim,
Yes, sharing this code as much as possible makes sense to me, and I agree libFortranRuntime should not depend on libFortranEvaluate (we do not wish to have the Fortran runtime depend on LLVM support libraries or C++ libraries).

Have you looked at flang/include/flang/Common/real.h in llvm-project?

It may contain what you need and should be usable in both the runtime and evaluate. Otherwise, it may just be the right place to share this code.
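
If I am reading real.h right, RealDetails exposes the decimal precision and range per binary precision, so a SELECTED_REAL_KIND table could be derived from it instead of being hard-coded, and could not drift from the shared type model. Roughly (the kind-to-binary-precision mapping below is the usual one, but check the header):

#include "flang/Common/real.h"

using Fortran::common::RealDetails;

struct RealKindEntry {
  int kind, decimalPrecision, decimalRange;
};

// Binary precisions: 11 = IEEE half, 8 = bfloat16, 24 = single,
// 53 = double, 64 = x87 80-bit, 113 = IEEE quad.
static constexpr RealKindEntry realKinds[]{
    {2, RealDetails<11>::decimalPrecision, RealDetails<11>::decimalRange},
    {3, RealDetails<8>::decimalPrecision, RealDetails<8>::decimalRange},
    {4, RealDetails<24>::decimalPrecision, RealDetails<24>::decimalRange},
    {8, RealDetails<53>::decimalPrecision, RealDetails<53>::decimalRange},
    {10, RealDetails<64>::decimalPrecision, RealDetails<64>::decimalRange},
    {16, RealDetails<113>::decimalPrecision, RealDetails<113>::decimalRange},
};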

Hi @jeanPerier,

Thanks for the pointers! Looks like some of the code can be reused, yes.

One question I still have is whether there is some plan regarding modeling the target itself. Maybe there is already something usable that I have missed.

The value returned by these intrinsics may change with the target, and we want to keep the compile-time evaluation (flang/lib/Evaluate) and the runtime evaluation (flang/runtime) in sync.

One can see this behaviour with gfortran already: for instance, selected_real_kind(16, 1) returns 10 (x87 80-bit fp) on x86_64 but 16 (float128) on AArch64. Now, I'm not saying we have to mimic gfortran here (ifort returns 16), but a similar issue may arise between float16_t and float32_t: if a target does not want to support float16_t, we should never return kind=2.
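
To make the concern concrete, here is a sketch of target-parameterized selection (all names hypothetical); filtering the kind table by what the target supports reproduces the gfortran behaviour above:

struct TargetInfo {
  bool hasFloat16;  // kind 2 (IEEE half)
  bool hasFloat80;  // kind 10 (x87 80-bit, x86 only)
  bool hasFloat128; // kind 16 (IEEE quad)
};

constexpr int SelectedRealKind(const TargetInfo &t, int p, int r) {
  struct Entry {
    int kind, precision, range;
  };
  constexpr Entry table[]{{2, 3, 4}, {4, 6, 37}, {8, 15, 307},
      {10, 18, 4931}, {16, 33, 4931}};
  for (const Entry &e : table) {
    if ((e.kind == 2 && !t.hasFloat16) || (e.kind == 10 && !t.hasFloat80) ||
        (e.kind == 16 && !t.hasFloat128))
      continue; // kind not available on this target
    if (e.precision >= p && e.range >= r)
      return e.kind;
  }
  return -3; // simplified: real code must distinguish the -1/-2/-3/-4 cases
}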

Maybe it is more reasonable to support all the IEEE types and let the backends lower them (which may involve runtime calls, as already happens with kind=16)?

Sorry if I'm derailing the topic of the thread a bit.

Currently the front-end approach is indeed to be target agnostic by supporting all kinds. Lowering relies on the LLVM target triple when it matters, but we have yet to try any cross-compilation.

However, the point you raise about selected_real_kind(16, 1) is interesting. The front-end is required to be able to fold this expression, since it is a constant expression and may be used to compute type kinds (e.g. real(selected_real_kind(16, 1)) :: x). Semantics needs to resolve the kind in those cases, so some knowledge about the target will have to make it into semantics too.
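
Reusing the hypothetical TargetInfo/SelectedRealKind sketch from earlier in the thread: if the shared helper is constexpr and parameterized by a target description, folding in Evaluate can call exactly the function the runtime executes, so the two cannot disagree for a given target. For example:

// Hypothetical target descriptions mirroring the gfortran observation above.
constexpr TargetInfo x86_64Target{/*hasFloat16=*/true, /*hasFloat80=*/true,
    /*hasFloat128=*/true};
constexpr TargetInfo aarch64Target{/*hasFloat16=*/true, /*hasFloat80=*/false,
    /*hasFloat128=*/true};

// One definition serves both compile-time folding and the runtime call.
static_assert(SelectedRealKind(x86_64Target, 16, 1) == 10);
static_assert(SelectedRealKind(aarch64Target, 16, 1) == 16);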