[PATCH] integer: Update integer limits to comply with spec

The values for the char/short/integer/long minimums were declared with their actual values, not the definitions from the CL spec (v1.1). As a result, (-2147483648) was actually being treated as a long by the compiler, not an int, which caused issues when trying to add/subtract that value from a vector.

Update the definitions to use the values declared by the spec, and also add explicit casts for the char/short minimums so that the compiler actually treats them as shorts/chars. Without those casts, they actually end up stored as integers.

The compiler can sign-extend the values if it needs to convert char->short or short->int.

Reported-by: Moritz Pflanzer <moritz.pflanzer14@imperial.ac.uk>
Signed-off-by: Aaron Watry <awatry@gmail.com>
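
For context, an illustrative OpenCL C kernel showing the breakage described above (the kernel is hypothetical and not part of this patch):

/* With the old definition, INT_MIN expanded to -2147483648, which the
 * compiler treats as a long. A long scalar cannot be implicitly converted
 * to an int4 vector, so the commented-out line failed to compile. With
 * INT_MIN defined as (-2147483647 - 1), the scalar stays an int and
 * widens to int4 as expected. */
__kernel void int_min_example(__global int4 *out) {
  int4 v = (int4)(1, 2, 3, 4);
  out[0] = v + (-2147483647 - 1);    /* int scalar + int4: compiles       */
  /* out[1] = v + (-2147483648); */  /* long scalar + int4: compile error */
}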

Note: I’ve sent some updated piglit tests to the piglit list, which can be reviewed here:
http://lists.freedesktop.org/archives/piglit/2015-September/017065.html

I verified that each of those tests failed with a compile error before the changes in this patch, then fixed the definitions one at a time, retesting after each fix.

--Aaron

The values for the char/short/integer/long minimums were declared with their actual values, not the definitions from the CL spec (v1.1). As a result, (-2147483648) was actually being treated as a long by the compiler, not an int, which caused issues when trying to add/subtract that value from a vector.

Update the definitions to use the values declared by the spec, and also add explicit casts for the char/short minimums so that the compiler actually treats them as shorts/chars. Without those casts, they actually end up stored as integers.

The compiler can sign-extend the values if it needs to convert char->short or short->int.

Reported-by: Moritz Pflanzer <moritz.pflanzer14@imperial.ac.uk>
CC: Moritz Pflanzer <moritz.pflanzer14@imperial.ac.uk>
Signed-off-by: Aaron Watry <awatry@gmail.com>
---
generic/include/clc/integer/definitions.h | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/generic/include/clc/integer/definitions.h b/generic/include/clc/integer/definitions.h
index a407974..505002f 100644
--- a/generic/include/clc/integer/definitions.h
+++ b/generic/include/clc/integer/definitions.h
@@ -1,14 +1,14 @@
#define CHAR_BIT 8
#define INT_MAX 2147483647
-#define INT_MIN -2147483648
+#define INT_MIN (-2147483647 - 1)

Is there some reason why

#define INT_MIN ((int)-2147483648)

won't work?

I also don't understand how subtracting one guarantees the result will
be an integer and not long.

-Tom

Is there some reason why

#define INT_MIN ((int)-2147483648)

won't work?

This seems to work, at least no error is generated, but I guess it produces a signed overflow. The '-' is treated as a unary operator, not as part of the number, so '2147483648' is evaluated first; it is too large for a signed int and is therefore treated as a long without the explicit cast. Only afterwards is the number negated. (See [llvm-dev] [OpenCL] Implicit arithmetic conversion of INT_MIN to int vector type)

I also don't understand how subtracting one guarantees the result will
be an integer and not long.

The number 2147483647 still fits in a signed int, so no implicit conversion to long is performed. It is then negated and one is subtracted, which yields the correct value for INT_MIN without an overflow.
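
A small illustrative sketch of the two cases (the program-scope constants are made up for this example; OpenCL C sizes assumed, i.e. 32-bit int and 64-bit long):

/* Not code from the patch -- just showing how each constant is typed. */
__constant long from_literal = -2147483648;      /* 2147483648 is too wide for an int, so the
                                                    literal is a long; the unary '-' is then
                                                    applied to that long                      */
__constant int  from_cast    = (int)-2147483648; /* the cast narrows the long back to an int
                                                    only after the fact                       */
__constant int  spec_form    = -2147483647 - 1;  /* both operands fit in an int, so the whole
                                                    expression stays an int                   */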

Moritz

> The values for the char/short/integer/long minimums were declared with
> their actual values, not the definitions from the CL spec (v1.1). As a
> result, (-2147483648) was actually being treated as a long by the
> compiler, not an int, which caused issues when trying to add/subtract
> that value from a vector.
>
> Update the definitions to use the values declared by the spec, and also
> add explicit casts for the char/short minimums so that the compiler
> actually treats them as shorts/chars. Without those casts, they actually
> end up stored as integers.
>
> The compiler can sign-extend the values if it needs to convert
> char->short or short->int.
>
> Reported-by: Moritz Pflanzer <moritz.pflanzer14@imperial.ac.uk>
> CC: Moritz Pflanzer <moritz.pflanzer14@imperial.ac.uk>
> Signed-off-by: Aaron Watry <awatry@gmail.com>
> ---
> generic/include/clc/integer/definitions.h | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/generic/include/clc/integer/definitions.h b/generic/include/clc/integer/definitions.h
> index a407974..505002f 100644
> --- a/generic/include/clc/integer/definitions.h
> +++ b/generic/include/clc/integer/definitions.h
> @@ -1,14 +1,14 @@
> #define CHAR_BIT 8
> #define INT_MAX 2147483647
> -#define INT_MIN -2147483648
> +#define INT_MIN (-2147483647 - 1)

> Is there some reason why
>
> #define INT_MIN ((int)-2147483648)
>
> won't work?
>
> I also don't understand how subtracting one guarantees the result will
> be an integer and not long.

You're right that ((int)-2147483648) would work as well, which is somewhat
similar to what I did with the char/short minimums in this patch (forcibly
casting the result). The only reason that I went with the value that I did
for INT_MIN was that the spec seems pretty specific:

Section 6.11.3 of the 1.1 Spec:
"The macro names given in the following list must use the values specified.
The values shall all
be constant expressions suitable for use in #if preprocessing directives."

<snip>
#define INT_MIN (-2147483647 – 1)

The only reason that I put an explicit cast in for the char/short values
was because they would not actually work correctly for charN/shortN casts
without them, and it seems to be the intention behind the spec that the
MIN/MAX of a given type should be stored as that type.
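
To make that concrete, a hypothetical OpenCL C kernel showing the charN failure mode (the macro forms below are assumptions for illustration, not the exact definitions from the patch):

/* If CHAR_MIN expands to a bare int constant such as (-128), the scalar
 * operand has greater rank than the vector element type and the implicit
 * scalar->vector conversion is rejected -- the compile error the piglit
 * tests hit. With the explicit cast, the scalar is already a char and
 * widens cleanly to char4. */
__kernel void char_min_example(__global char4 *out) {
  char4 v = (char4)(1, 2, 3, 4);
  /* out[0] = v + (-128); */        /* int scalar + char4: compile error */
  out[0] = v + (char)(-127 - 1);    /* char scalar + char4: compiles     */
}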

I don't personally care that much what we end up with, as the result after
preprocessing should be identical... as long as the new unit tests pass,
I'm good.

--Aaron

> > The values for the char/short/integer/long minimums were declared with
> > their actual values, not the definitions from the CL spec (v1.1). As a
> > result, (-2147483648) was actually being treated as a long by the
> > compiler, not an int, which caused issues when trying to add/subtract
> > that value from a vector.
> >
> > Update the definitions to use the values declared by the spec, and also
> > add explicit casts for the char/short minimums so that the compiler
> > actually treats them as shorts/chars. Without those casts, they actually
> > end up stored as integers.
> >
> > The compiler can sign-extend the values if it needs to convert
> > char->short or short->int.
> >
> > Reported-by: Moritz Pflanzer <moritz.pflanzer14@imperial.ac.uk>
> > CC: Moritz Pflanzer <moritz.pflanzer14@imperial.ac.uk>
> > Signed-off-by: Aaron Watry <awatry@gmail.com>
> > ---
> > generic/include/clc/integer/definitions.h | 12 ++++++------
> > 1 file changed, 6 insertions(+), 6 deletions(-)
> >
> > diff --git a/generic/include/clc/integer/definitions.h b/generic/include/clc/integer/definitions.h
> > index a407974..505002f 100644
> > --- a/generic/include/clc/integer/definitions.h
> > +++ b/generic/include/clc/integer/definitions.h
> > @@ -1,14 +1,14 @@
> > #define CHAR_BIT 8
> > #define INT_MAX 2147483647
> > -#define INT_MIN -2147483648
> > +#define INT_MIN (-2147483647 - 1)
>
> Is there some reason why
>
> #define INT_MIN ((int)-2147483648)
>
> won't work?
>
> I also don't understand how subtracting one guarantees the result will
> be an integer and not long.
>
>
> You're right that ((int)-2147483648) would work as well, which is somewhat
> similar to what I did with the char/short minimums in this patch (forcibly
> casting the result). The only reason that I went with the value that I did
> for INT_MIN was that the spec seems pretty specific:
>
> Section 6.11.3 of the 1.1 Spec:
> "The macro names given in the following list must use the values specified.
> The values shall all be constant expressions suitable for use in #if
> preprocessing directives."
>
> <snip>
> #define INT_MIN (-2147483647 - 1)
>
> The only reason that I put an explicit cast in for the char/short values
> was because they would not actually work correctly for charN/shortN casts
> without them, and it seems to be the intention behind the spec that the
> MIN/MAX of a given type should be stored as that type.
>
> I don't personally care that much what we end up with, as the result after
> preprocessing should be identical... as long as the new unit tests pass,
> I'm good.

I don't mind using the values from the spec. The only change I would request
is that we cast INT_MIN to int like we do for char and short. I think the
compiler is still allowed to evaluate that expression to long if it wants to.

-Tom

>
> > > The values for the char/short/integer/long minimums were declared with
> > > their actual values, not the definitions from the CL spec (v1.1). As a
> > > result, (-2147483648) was actually being treated as a long by the
> > > compiler, not an int, which caused issues when trying to add/subtract
> > > that value from a vector.
> > >
> > > Update the definitions to use the values declared by the spec, and also
> > > add explicit casts for the char/short minimums so that the compiler
> > > actually treats them as shorts/chars. Without those casts, they actually
> > > end up stored as integers.
> > >
> > > The compiler can sign-extend the values if it needs to convert
> > > char->short or short->int.
> > >
> > > Reported-by: Moritz Pflanzer <moritz.pflanzer14@imperial.ac.uk>
> > > CC: Moritz Pflanzer <moritz.pflanzer14@imperial.ac.uk>
> > > Signed-off-by: Aaron Watry <awatry@gmail.com>
> > > ---
> > > generic/include/clc/integer/definitions.h | 12 ++++++------
> > > 1 file changed, 6 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/generic/include/clc/integer/definitions.h b/generic/include/clc/integer/definitions.h
> > > index a407974..505002f 100644
> > > --- a/generic/include/clc/integer/definitions.h
> > > +++ b/generic/include/clc/integer/definitions.h
> > > @@ -1,14 +1,14 @@
> > > #define CHAR_BIT 8
> > > #define INT_MAX 2147483647
> > > -#define INT_MIN -2147483648
> > > +#define INT_MIN (-2147483647 - 1)
> >
> > Is there some reason why
> >
> > #define INT_MIN ((int)-2147483648)
> >
> > won't work?
> >
> > I also don't understand how subtracting one guarantees the result will
> > be an integer and not long.
> >
> >
> You're right that ((int)-2147483648) would work as well, which is somewhat
> similar to what I did with the char/short minimums in this patch (forcibly
> casting the result). The only reason that I went with the value that I did
> for INT_MIN was that the spec seems pretty specific:
>
> Section 6.11.3 of the 1.1 Spec:
> "The macro names given in the following list must use the values specified.
> The values shall all be constant expressions suitable for use in #if
> preprocessing directives."
>
> <snip>
> #define INT_MIN (-2147483647 - 1)
>
> The only reason that I put an explicit cast in for the char/short values
> was because they would not actually work correctly for charN/shortN casts
> without them, and it seems to be the intention behind the spec that the
> MIN/MAX of a given type should be stored as that type.
>
> I don't personally care that much what we end up with, as the result after
> preprocessing should be identical... as long as the new unit tests pass,
> I'm good.
>

> I don't mind using the values from the spec. The only change I would request
> is that we cast INT_MIN to int like we do for char and short. I think the
> compiler is still allowed to evaluate that expression to long if it wants to.

Works for me. v2 incoming.
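
For reference, a minimal sketch of what the v2 INT_MIN definition would presumably look like with that cast applied (an assumption; the v2 patch itself is not part of this thread):

/* Spec value, explicitly cast to int like the char/short minimums. */
#define INT_MIN ((int)(-2147483647 - 1))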