Hello Everyone,
I’ve just committed http://reviews.llvm.org/D11821 which introduces a check-libomp target which uses llvm-lit to test the newly built OpenMP runtime library.
If (when) you have problems, please leave feedback.
Basics to get it working in out-of-LLVM-tree builds:
Have llvm-lit in your PATH or specify it via -DLIBOMP_LLVM_LIT_EXECUTABLE=path/to/llvm-lit
To run: make check-libomp
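For an out-of-LLVM-tree build, the two steps above might look like the following; the paths are illustrative, assuming a build directory alongside an openmp source checkout:

```shell
# Configure the out-of-tree build, pointing CMake at llvm-lit explicitly
# (only needed if llvm-lit is not already on PATH). Paths are examples.
cmake -DLIBOMP_LLVM_LIT_EXECUTABLE=/path/to/llvm-lit ../openmp

# Build the runtime and run the lit-based test suite.
make check-libomp
```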
– Johnny
Jonathan,
As I mentioned in http://reviews.llvm.org/D11821, the current
default tolerances of...
return ((measured_time > 0.99 * wait_time) && (measured_time < 1.01 * wait_time));
cause the runtime/test/api/omp_get_wtime.c testcase to fail its
execution test on OS X 10.10 and 10.11, but not on 10.9. Running
the test case repeatedly on a dual quad-core Mac Pro (3,1) produces
exit codes ranging from 2 to 6. Increasing the tolerance to...
return ((measured_time > 0.98 * wait_time) && (measured_time < 1.02 * wait_time));
results in the exit code ranging from 0 to 1. Only increasing the
tolerances to...
return ((measured_time > 0.97 * wait_time) && (measured_time < 1.03 * wait_time));
causes the test case to always return an exit code of 0.
Where exactly did this 1% timer tolerance come from, and is it
part of a particular OpenMP specification? If so, I'll open a
radar bug report against OS X 10.11 on this issue. Thanks in advance
for any clarification.
Jack