LAA behavior on Incorrect #pragma omp simd.

Hi All,
I have a question regarding the behavior of LoopAccessAnalysis on an incorrect #pragma omp simd under the -fopenmp-simd flag. How should the compiler behave if the #pragma omp simd on a loop is incorrect, and LoopAccessAnalysis can prove it?

Here is the sample code.

#pragma omp simd
for (dim_t p = 0; p < m; ++p)
  #pragma unroll
  for (dim_t i = 0; i < 6; ++i) {
    r[i].real += a[p + i * lda].real * x[p].real +
                 a[p + i * lda].imag * x[p].imag;
    r[i].imag += a[p + i * lda].imag * x[p].real -
                 a[p + i * lda].real * x[p].imag;
  }

The specification on this loop is incorrect: the parallel_accesses metadata indicates that there is no loop-carried memory dependence, which is not true in this case.

In the default flow, LICM hoists and sinks the loads and stores of r[i] and the loop vectorizer vectorizes this loop based on “llvm.loop.parallel_accesses” metadata.

If the hoist and sink transformation is prevented for some accesses for some reason in LICM, the loop vectorizer currently generates incorrect vector code without any warning. Although LoopAccessAnalysis.cpp contains a check to detect such cases (HasDependenceInvolvingLoopInvariantAddress), LAA does not warn if the “llvm.loop.parallel_accesses” metadata is present.

Is this expected?

Shouldn’t the compiler refuse to vectorize if it can prove that there is a loop-carried dependence and the vectorizer will generate incorrect code?
Or should it blindly follow the user directive (without a warning)?

It is very difficult for the user to identify the real source of the problem if the compiler vectorizes the loop silently. I agree it’s hard to detect incorrect specifications in general, but for cases where detection is easy, we should at least emit a warning.

I am attaching a sample input file on which loop vectorizer generates incorrect code.
Run with: opt -loop-vectorize

Thanks,
Rajasekhar

r0.ll (45.1 KB)

Hi Rajasekhar,

thanks for reporting this.

The specification on this loop is incorrect: the parallel_accesses
metadata indicates that there is no loop-carried memory dependence, which is
not true in this case.

First, I think the lowering is actually broken when a simdlen is given:
we emit parallel_accesses metadata, which indicates the loop is free of
dependences, but #pragma omp simd simdlen(4) only means we are allowed
to assume there are no loop-carried dependences of distance smaller than 4.

Shouldn't the compiler refuse to vectorize if it can prove that there is a loop-carried
dependence and the vectorizer will generate incorrect code?
Or should it blindly follow the user directive (without a warning)?

I'm always on the fence when it comes to these questions. I think we
should blindly follow the directives but offer a flag that globally
turns on warnings for such odd situations.

It is very difficult for the user to identify the real source of the
problem if the compiler vectorizes the loop silently. I agree it's hard to
detect incorrect specifications in general, but for cases where detection is easy,
we should at least emit a warning.

No warning by default, as that would clutter the output. On second thought,
maybe we should warn if we determine the given information is plainly wrong.

Cheers,
  Johannes

As an alternative, we could add an option (disabled by default, because the behavior would not meet the standard) to emit OpenMP simd loops in analysis+vectorization (hint) mode.