Hi All,
I have a question about the behavior of LoopAccessAnalysis when an incorrect #pragma omp simd is compiled with the -fopenmp-simd flag.
How should the compiler behave if the #pragma omp simd annotation on a loop is incorrect and LoopAccessAnalysis can prove it?
Here is the sample code.
#pragma omp simd
for (dim_t p = 0; p < m; ++p)
  #pragma unroll
  for (dim_t i = 0; i < 6; ++i) {
    r[i].real += a[p + i * lda].real * x[p].real +
                 a[p + i * lda].imag * x[p].imag;
    r[i].imag += a[p + i * lda].imag * x[p].real -
                 a[p + i * lda].real * x[p].imag;
  }
The annotation on this loop is incorrect: the llvm.loop.parallel_accesses metadata indicates that there are no loop-carried memory dependences, which is not true here, since r[i] is read and written in every iteration of the outer p loop.
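To make the dependence explicit, here is a minimal sketch with hypothetical names (accumulate, acc) that has the same pattern as the r[i] updates above; the store to acc[i] in iteration p feeds the load in iteration p+1, which is exactly the loop-carried dependence the metadata denies:

/* Minimal sketch (hypothetical names) of the same dependence pattern:
   acc[i] is read-modify-written on every iteration of p, so the store in
   iteration p feeds the load in iteration p+1, a loop-carried memory
   dependence that the parallel_accesses metadata claims does not exist. */
void accumulate(double *acc, const double *a, long m, long lda) {
  #pragma omp simd
  for (long p = 0; p < m; ++p)
    for (long i = 0; i < 6; ++i)
      acc[i] += a[p + i * lda];
}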
In the default flow, LICM hoists the loads and sinks the stores of r[i], and the loop vectorizer then vectorizes the loop based on the “llvm.loop.parallel_accesses” metadata.
If, for some reason, LICM is prevented from hoisting/sinking some of these accesses, the loop vectorizer currently generates incorrect vector code without any warning. LoopAccessAnalysis.cpp does have a check that detects such cases (HasDependenceInvolvingLoopInvariantAddress), but LAA does not warn when the “llvm.loop.parallel_accesses” metadata is present.
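For illustration, here is a rough hand-written C analogue (hypothetical names, not the actual transformed IR) of what LICM's promotion does for one accumulator; once the value lives in a register, the per-iteration loads and stores are gone and only a scalar reduction remains, which is why the default flow happens to produce correct code:

/* Rough C analogue (hypothetical names, not the actual transformed IR)
   of LICM promoting one accumulator to a register: */
void accumulate_promoted(double *acc, const double *a, long m) {
  double sum = acc[0];             /* load hoisted out of the loop */
  for (long p = 0; p < m; ++p)
    sum += a[p];                   /* no memory dependence left    */
  acc[0] = sum;                    /* store sunk below the loop    */
}
/* If this promotion is blocked, the load/store of acc[0] stay inside the
   loop, the dependence remains in memory, and the vectorizer, trusting
   the parallel_accesses metadata, generates incorrect code. */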
Is this expected?
Shouldn't the compiler refuse to vectorize if it can prove that there is a loop-carried dependence and that the vectorizer will generate incorrect code?
Or should it blindly follow the user's directive (without a warning)?
It is very difficult for the user to identify the real source of the problem if the compiler vectorizes the loop silently. I agree it is hard to detect incorrect annotations in general, but in cases where it is easy to detect, we should at least emit a warning.
I am attaching a sample input file on which the loop vectorizer generates incorrect code.
Run with: opt -loop-vectorize r0.ll
Thanks,
Rajasekhar
r0.ll (45.1 KB)