FW: GSL 'make check' failure

Hello Matthijs,

Actually I just checked and the problem with this bit of code is an unsigned
integer value for 'i'.

What I previously failed to show in the code snippet was that 'i' is
declared as type 'size_t'...which I'm guessing defaults to uint32 in this
case.

Instead of using 'size_t i;' I replaced it with a signed int and things
worked fine. But if I used an unsigned variable then things blew up. Here is
a failure table:

short        pass
ushort       pass

int          pass
uint         fail

long         pass
ulong        fail

long long    pass
ulong long   fail

Unsigned short doesn't blow up??? Hmmmm....

I am compiling on Ubuntu (kernel 2.6.24) with an Intel Core 2 Quad Q6600 and svn
llvm/clang.

Thanks,
K.Wilson

P.S. This is not my code. It is the GNU Scientific Library, and it uses some
complex pure C (i.e. function pointers and macro expansions that make things
harder to track). Very nicely written and an interesting testing
framework...just difficult to follow on a quick perusal ;)

for (i = 0; i < N; i++)
  {
    v->data[2*i] = ATOMIC(N - i);
    v->data[2*i + 1] = ATOMIC(10 * (N - i) + 1);
  }

Then fwrite(), fclose(), free().

I've seen similar free() errors after an out-of-bounds write. Are you sure
that data is allocated large enough (2N elements)?

Gr.

Matthijs

Hello Matthijs,

Actually I just checked and the problem with this bit of code is an unsigned
integer value for 'i'.

What I previously failed to show in the code snippet was that 'i' is
declared as type 'size_t'...which I'm guessing defaults to uint32 in this
case.

Can you give a reduced testcase, i.e. something that I could actually
compile to reproduce the issue? If you have trouble reducing it, the
output of clang -E for the whole file would be fine. I'll try to
figure it out, but having actual code will make it a lot easier.

(That said, this looks a bit similar to a bug I fixed recently; are
you updated to trunk?)

Another thing that might help is a diff between the output of clang
-emit-llvm of the unsigned int version vs the output of clang
-emit-llvm for the int version. That would help narrow down the
problem.

Instead of using 'size_t i;' I replaced it with a signed int and things
worked fine. But if I used an unsigned variable then things blew up. Here is
a failure table:

short        pass
ushort       pass

int          pass
uint         fail

long         pass
ulong        fail

long long    pass
ulong long   fail

Unsigned short doesn't blow up??? Hmmmm....

My guess is that's because unsigned short promotes to int. Although,
it's not obvious to me why signed vs unsigned would result in
different code...

-Eli

Hello Eli,

I am using the newest version of LLVM/clang, as of today. The short does
look like it is being promoted...and 'unsigned short' is promoted to signed
'int', not 'unsigned int'.

I have attached the diff between the 'signed int' and 'unsigned int'
versions of the test_complex_binary function when using -emit-llvm. I also
attached the complete test_complex_binary function.

This is the gist of the diff file:

< %cmp = icmp ult i32 %tmp, 1027 ; <i1> [#uses=1]

test.uint (6.87 KB)

diff (785 Bytes)


So '(un)signed less than' and '(un)signed int to floating point' are the
only differences.

This function is hard to isolate because of all the macro expansions, so I
tried to set up a small test case yesterday, with no luck. Let me know if
you can make something of the attached information and if not I will try to
isolate something again.

Thanks,
K.Wilson

Mmm... I'll have to work on this a bit more. I actually tried
compiling gsl myself; I ran into a bug with LLVM codegen for fabs, but
that doesn't look like the issue you're running into. Everything seems
to work after hacking around that issue.

That said, I compiled with optimizations turned off (llvm-ld
-disable-opt); your issue might actually be caused by an LLVM
optimization pass.

-Eli

Hello Eli,

Yep, it is one of the optimization passes run by llvm-ld, because everything
is fine when I turn them off :(

And we know it is a problem with the LLVM codegen because everything works
fine when using -native-cbe.

Thanks,
K.Wilson

Found it; filed LLVM bug 2535, "Off-by-one bug with loop strength reduction of induction variable".

-Eli