Segmentation fault in libomptarget.so

Hi,

I’m trying OpenMP offloading to a GPU and I’m getting a segfault when running a test code. I followed the instructions here: https://www.hahnjo.de/blog/2018/10/08/clang-7.0-openmp-offloading-nvidia.html.

Here is the piece of code I’m trying to run:

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    if (argc != 2)
    {
        printf("Usage: %s <n>\n", argv[0]);
        return 0;
    }

    int n = atoi(argv[1]);

    double* x = (double*)malloc(sizeof(double) * n);
    double* y = (double*)malloc(sizeof(double) * n);

    double idrandmax = 1.0 / RAND_MAX;
    double a = idrandmax * rand();
    for (int i = 0; i < n; i++)
    {
        x[i] = idrandmax * rand();
        y[i] = idrandmax * rand();
    }
    printf("Here\n\n");

    #pragma omp target
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            y[i] += a * x[i];
    }

    double avg = 0.0, min = y[0], max = y[0];
    for (int i = 0; i < n; i++)
    {
        avg += y[i];
        if (y[i] > max) max = y[i];
        if (y[i] < min) min = y[i];
    }

    printf("min = %f, max = %f, avg = %f\n", min, max, avg / n);

    free(x);
    free(y);

    return 0;
}

I’m compiling the code like this: clang -fopenmp -fopenmp-targets=nvptx64 -O2 example.c
When I run it, I get a segmentation fault immediately.

Any thoughts?

Thanks!
Talita

Thanks for your quick response, Alexey. I’ve just tried what you suggested and I’m still getting the same error.

Best,
Talita

From a quick look at your compile line: can you try using the full triple?

-fopenmp-targets=nvptx64-nvidia-cuda
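
so the full compile line becomes:

clang -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda -O2 example.c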

Thanks,

–Doru

Tried all of the above… still a segfault on my end. BTW, I don’t know whether it’s relevant, but I’m using CUDA 9.2.

This fixed it for me:

#pragma omp target map(tofrom: y[:n]) map(to: x[:n])
#pragma omp parallel for
for (int i = 0; i < n; i++)
    y[i] += a * x[i];
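
The reason this matters: x and y are heap pointers, so without explicit map clauses the target region ends up dereferencing host addresses that were never copied to the device, which segfaults on machines without unified shared memory. If you prefer to keep the data movement separate from the compute construct, the same mapping can be written with a target data region, roughly like this:

#pragma omp target data map(to: x[:n]) map(tofrom: y[:n])
{
    #pragma omp target
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}

If it still crashes, try running with LIBOMPTARGET_DEBUG=1 set in the environment (it only prints anything if libomptarget was built with debugging enabled); the trace of mappings and kernel launches usually shows where things go wrong.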

–Doru

Hi,

I’m attaching the CMake logs for both LLVM and OpenMP.

Thanks,
Talita

llvm_cmake_line.txt (177 Bytes)

llvm_CMakeError.log (98.6 KB)

llvm_CMakeOutput.log (254 KB)

openmp_CMakeError.log (12 KB)

openmp_CMakeOutput.log (109 KB)

openmp_cmake_line (263 Bytes)

Hi,

Did any of you guys have the chance to take a look at this?

Thanks!
Talita

No worries! Thanks so much!

Talita

I tried it with the 7.0 source as you described in the attachment. I did not encounter the problem. The only difference in my OpenMP runtime build is that I used the system gcc instead of clang. Which version of clang did you use to build the OpenMP runtime? Which OS is the build on?
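
(For reference, the output of clang --version and cat /etc/os-release on the build machine would answer both.)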

I assume that you updated the example program based on Alexey’s suggestion and still get the segfault.

Kelvin

Hi Kelvin,

I see… in my case I used the clang 7 that I compiled to build the OpenMP runtime. OS info:

Ubuntu
VERSION="16.04.5 LTS (Xenial Xerus)"
Kernel: 4.4.0-141-generic

I could try rebuilding the OpenMP runtime with gcc instead. Which gcc version did you use?

Thanks,
Talita

Hi Talita,

What does ldd show for your executable? Is it linked to the libomp.so you built, or to something like libomp.so.5 in a system directory?
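
For example, something like this (the paths below are placeholders; yours will differ):

$ ldd ./a.out | grep omp
        libomp.so => /path/to/your/llvm/lib/libomp.so (0x...)
        libomptarget.so => /path/to/your/llvm/lib/libomptarget.so (0x...)

If they resolve to a system directory instead of your build, pointing LD_LIBRARY_PATH at your LLVM lib directory before running should change that.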

Joel

It is GCC 5.4.

$ gcc --version
gcc (Ubuntu/IBM 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609

Kelvin

Oops, sorry. It should be:

$ gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11)

Kelvin

Hi Alexey,

I apologize for not replying sooner; it was a busy week. I do have good news, though! I’ve just replaced my workstation, and in doing so I had to recompile everything from scratch. I now have CUDA 10, and I went through basically the same procedure as before, but this time with clang 8. Everything worked just fine. Just for the record, here are my system details and the software versions I’m using:

Ubuntu 16.04, kernel 4.4.0-142-generic
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0

CUDA 10, driver 410.93

Built from these sources:
https://github.com/llvm-mirror/compiler-rt/tree/release_80

https://github.com/llvm-mirror/clang/tree/release_80

https://github.com/llvm-mirror/openmp/tree/release_80

https://github.com/llvm-mirror/llvm/tree/release_80
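
For completeness, the configure options follow the blog post linked at the top of the thread; roughly like this (the exact lines I used are in the cmake-line attachments I posted earlier, and the sm_61/61 values here are just example values for my GPU):

cmake -DCMAKE_BUILD_TYPE=Release -DCLANG_OPENMP_NVPTX_DEFAULT_ARCH=sm_61 ../llvm

and then, for the OpenMP runtime, built with the freshly installed clang:

cmake -DCMAKE_BUILD_TYPE=Release -DLIBOMPTARGET_NVPTX_COMPUTE_CAPABILITIES=61 ../openmp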

At some point I should try rebuilding the compiler on the system I was using before, to figure out what the problem actually was. But for now, I’m happy this is working. Thank you for all the help!

Best,
Talita