[OpenCL] Implicit arithmetic conversion of INT_MIN to int vector type

Hello,

I recently came across an OpenCL kernel in which an int vector type was subtracted from the INT_MIN constant, e.g.

int2 v2 = INT_MIN - (int2)(0);

INT_MIN was defined as

#define INT_MIN (-2147483648)

Clang in OpenCL mode (-x cl) produces the following error:

vector_conversion.c:12:42: error: can't convert between vector values of different size ('long' and 'int2' (vector of 2 'int' values))
    int2 v_int = INT_MIN - (int2)(0); // Only error long to int2 conversion is not possible
                               ~~~~~~~ ^ ~~~~~~~~
1 error generated.

According to the OpenCL C standard (§6.2.6 and §6.3) I expected the scalar INT_MIN to be widened to the vector type, so the operation should be possible. The problem seems to be that INT_MIN is internally represented as long, even though its value fits into a signed int, and long cannot be implicitly converted to int and therefore not to int2.

If on the other hand INT_MIN is defined as

#define INT_MIN (-2147483647 - 1)

it is represented as int, although it is the same number, and clang does not produce an error.
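Another workaround that appears to avoid the error is to convert the scalar to int explicitly before the vector operation (a minimal sketch of mine, not taken from the original kernel; the value -2147483648 is representable as an OpenCL int, so the conversion is well defined):

int2 v2 = (int)INT_MIN - (int2)(0); // the scalar now has the rank of int and is widened to int2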

More surprisingly, the expression

int2 t = (int2)(INT_MIN);

works for both versions of INT_MIN. So in this situation long seems to be implicitly converted to int, which should not be possible.

Finally, clang in C mode (-x c) accepts both versions and produces no error. I attached a test case in which I summarised all my findings. The behaviour I observed should be reproducible with

$ clang -x cl -S -emit-llvm -O0 -o - vector_conversion.c
$ clang -x c -S -emit-llvm -O0 -o - vector_conversion.c

I would be interested in your thoughts about this behaviour: whether it is correct that -2147483648 is represented as long, or whether the behaviour can be considered a bug in the OpenCL frontend of Clang. (Another possibility would be that the representation as long is correct, but that it is nevertheless a bug that the subtraction is not possible.)

Regards,

Moritz

vector_conversion.c (856 Bytes)

It depends on whether the ‘-’ is part of the syntax of the number or an operator applied afterwards. I believe C says it is an operator: the compiler examines ‘2147483648’ on its own, decides it won’t fit into 32 bits and chooses a 64-bit type, and only afterwards applies the negation.

Testing with clang 3.4.1 on Ubuntu x86_64:

cat >intmin.cpp <<END

#include <iostream>

int main() {
    auto i = -2147483647;  // fits into int, so auto deduces int
    auto j = -2147483648;  // 2147483648 does not fit into int, so auto deduces a 64-bit type
    std::cout << "sizeof " << i << " = " << sizeof(i) << std::endl;
    std::cout << "sizeof " << j << " = " << sizeof(j) << std::endl;
    return 0;
}
END
clang++ -std=c++11 -o intmin intmin.cpp && ./intmin

sizeof -2147483647 = 4

sizeof -2147483648 = 8

g++ 4.8.4 gives the same result.

You would not notice this with an explicit type instead of ‘auto’.
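For example (a small sketch, not part of the original message), with an explicit type the 64-bit value is simply converted back on initialisation:

int j = -2147483648; // the value still fits into a 32-bit int, so the conversion is value-preserving and sizeof(j) is 4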

Note that there should be CL_INT_MIN available (cl_platform.h) rather than defining your own.

This is an OpenCL kernel he is talking about - you don’t include cl_platform.h within kernel code; that is for host code only. CL mandates that the INT_MIN preprocessor macro exists, so the code is very much allowed.

Note how it is defined in the man pages here.

Cheers,
-Neil.

Thanks for the explanations.

I had already noticed how the OpenCL C standard defines the macro, but I was not sure whether there was a special reason for defining it as a subtraction instead of using the number itself. The handling of the minus as a unary operator explains why the number itself cannot be used.

I discovered this issue because the definition of INT_MIN in the libclc headers does not follow the OpenCL C standard and uses the number -2147483648 directly. This caused the compilation of some of my kernels to fail with the error from clang. I sent a separate email to the libclc developers and asked them to change the definition to be conformant with the standard and to avoid these problems.

Regards,

Moritz

To address the difference between C and OpenCL:

In OpenCL, we’re constrained by 6.2.6: “An error shall occur if any scalar operand has greater rank than the type of the vector element.”
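For illustration (a sketch, not part of the original message), that rule allows an int scalar to mix with int2 but rejects a long scalar:

kernel void rank_rule(global int2 *out) {
    int2 ok = 1 - (int2)(0);      // int scalar: same rank as the element type, widened to int2
    // int2 bad = 1L - (int2)(0); // error: long scalar has greater rank than int
    out[0] = ok;
}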

For C, we deliberately adopted a more relaxed requirement, because the OpenCL rules are, quite frankly, infuriating in real-world vector code when dealing with integer types.
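For comparison (again a sketch, not part of the original message, assuming clang’s ext_vector_type extension to obtain an int2 type in plain C), the same mixed-rank operation is accepted with -x c:

typedef int int2 __attribute__((ext_vector_type(2))); // int2 replacement for C mode

int2 relaxed(void) {
    int2 zero = {0, 0};
    return (-2147483648) - zero; // long scalar minus int2: accepted by clang in C mode
}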

Your actual question: OpenCL C follows the C standard w.r.t. integer literals, so -2147483648 has type long and thus cannot be implicitly converted to intN (due to the rule I quoted above). Both C and CL are behaving correctly here; as you have already surmised, the bug is in the definition of INT_MIN.

– Steve