aarch64 status for generating SIMD instructions

I’m using Fedora 22 with gcc 4.9.2 to build LLVM 3.5.1 on an ARM Juno reference board (Cortex-A53 & Cortex-A57).

I tried compiling some simple functions like dot product and axpy() into assembly to see if any of the SIMD instructions were generated (they weren’t).

Perhaps I’m missing some compiler flag to enable it.

Does anyone know what the status is for aarch64 generating SIMD instructions?

Anyone coordinating or leading this effort? (if there is one)

Which compiler flags have you been using?

There is definitely support for AArch64’s SIMD instructions, but their use depends on what the vectorizers can do with your code.

So far, all I have tried is -O3, with and without “-mcpu=cortex-a57”.

I’m new to LLVM so I’m not familiar with what optimization flags are available.

I tried poking around in the LLVM documentation but haven’t found a definitive list.

The clang man page is skimpy on details.

You can try something along the lines of “-O3 -mcpu=cortex-a57 -mfpu=neon -ffast-math”

Hi Ralph,

A bunch of useful options for vectorizers is listed in [1].

Also, what you see might be a target-independent issue, not an aarch64-specific one. If you can share the code you tested, I can try to explain why the vectorizer fails to handle it, and hopefully we can fix it later :)

Thanks,
Michael

[1] http://llvm.org/docs/Vectorizers.html

% clang -S -O3 -mcpu=cortex-a57 -ffast-math -Rpass-analysis=loop-vectorize dot.c
dot.c:15:1: remark: loop not vectorized: value that could not be identified as reduction is used outside the loop [-Rpass-analysis=loop-vectorize]
}
^
dot.c:15:1: note: could not determine the original source location for :0:0

I found “llvm-as < /dev/null | llc -march=aarch64 -mattr=help”, which listed a bunch of features, but when I tried adding “-mfpu=neon” or “-mattr=+neon”, clang complained that the option was unrecognized.

dot.s (875 Bytes)

dot.c (215 Bytes)

From this message it looks like the vectorizer is having some general problem with the testcase. I’d suggest trying the simplest possible case first, just to make sure the vectorizer works at all. Like this:
void foo(int *a, int *b, int *c) {
  int i;
  for(i = 0; i < 1000; i++) {
    a[i] = b[i] + c[i];
  }
}

If you compile it with ‘clang -O3 -arch arm64 -S’, you should see the SIMD instructions. If you do see them, it means that your original test is too complicated for the vectorizer right now (that might be due to some bug) - feel free to file a bug.

Thanks,
Michael

I just found that you attached the testcase.

The reason vectorizer fails on it is that there are three induction variables (i, ix, iy), and vectorizer doesn’t know about their strides. If you, for instance, replace inc_x and inc_y with ‘1’, the loop will be vectorized.
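For readers without the attachment, a loop of the shape being described might look like the following. This is a hypothetical reconstruction based on the variable names mentioned above (i, ix, iy, inc_x, inc_y), not the actual dot.c:

```c
/* Hypothetical reconstruction of a strided dot product like the one
 * discussed in this thread (the real dot.c attachment is not shown).
 * The loop has three induction variables: i, ix, and iy. Because the
 * strides inc_x and inc_y are runtime values, the vectorizer cannot
 * prove the memory accesses are consecutive. */
float dot(int n, const float *x, int inc_x, const float *y, int inc_y) {
    float v = 0.0f;
    int i, ix = 0, iy = 0;
    for (i = 0; i < n; i++) {
        v += x[ix] * y[iy];
        ix += inc_x;   /* stride unknown at compile time */
        iy += inc_y;
    }
    return v;
}
```

With inc_x and inc_y replaced by the constant 1, the accesses become unit-stride and (with -ffast-math for the reduction) the loop vectorizes, matching Michael’s observation.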

Thanks,
Michael

PS: The diagnostic is really confusing here.

Better. With this test I see:

% clang -S -O3 -Rpass=loop-vectorize test.c
test.c:3:3: remark: vectorized loop (vectorization factor: 4, unrolling interleave factor: 2) [-Rpass=loop-vectorize]
for(i = 0; i < 1000; i++) {
^

% clang -S -O3 -o test1.s -mcpu=cortex-a57 -Rpass=loop-vectorize test.c
test.c:3:3: remark: vectorized loop (vectorization factor: 4, unrolling interleave factor: 4) [-Rpass=loop-vectorize]
for(i = 0; i < 1000; i++) {
^

Both use SIMD instructions.

Changing the code to use a variable for the loop limit works OK, as does changing int to float.
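For concreteness, a variant of Michael’s foo() with both changes applied (runtime trip count, float elements) would look like this; the name and signature are illustrative, not from the thread:

```c
/* Variant of the earlier test loop: runtime trip count n instead of a
 * constant 1000, and float elements instead of int. Per the thread,
 * clang -O3 still vectorizes this shape, since the accesses remain
 * unit-stride and there is no cross-iteration reduction. */
void foo_n(float *a, const float *b, const float *c, int n) {
    int i;
    for (i = 0; i < n; i++) {
        a[i] = b[i] + c[i];
    }
}
```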

So I guess it is the return in dot.c that is causing a problem.

I will file a bug since I think the vectorizer should handle that case.

Hi Ralph,

Thanks for the report! The reason we can’t vectorize
float foo(float *b, float *c) {
  int i;
  float v = 0.0;
  for(i = 0; i < 1000; i++) {
    v += b[i] + c[i];
  }
  return v;
}

is that the ‘-ffast-math’ flag wasn’t specified. If you pass this flag, the loop gets vectorized.

It’s needed because vectorization here would change the order of the additions in the sum, and that reassociation is illegal if the fast-math flag is not set.

The original expression here would be: v = (b[0] + c[0]) + (b[1] + c[1]) + … + (b[999] + c[999]),
while in the vectorized version (vectorization factor 4) we would have: v = ((b[0] + c[0]) + (b[4] + c[4]) + … + (b[996] + c[996])) + ((b[1] + c[1]) + (b[5] + c[5]) + … + (b[997] + c[997])) + ((b[2] + c[2]) + … + (b[998] + c[998])) + ((b[3] + c[3]) + … + (b[999] + c[999])).

These expressions are not equivalent, because floating-point addition rounds after every operation, and the two orders can round differently.
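A small self-contained demonstration of why the two orders differ (this example is illustrative and not from the thread; the function names are made up). Summing the same four values left-to-right versus in two interleaved partial sums, as a vectorized reduction would, gives different results:

```c
/* Illustrative sketch: floating-point addition is not associative.
 * sum_ltr adds left-to-right, like the scalar loop; sum_pairs adds
 * even- and odd-indexed elements into separate accumulators and
 * combines them at the end, like a vectorized reduction would. */
float sum_ltr(const float *p, int n) {
    float v = 0.0f;
    for (int i = 0; i < n; i++)
        v += p[i];
    return v;
}

float sum_pairs(const float *p, int n) {
    float even = 0.0f, odd = 0.0f;
    for (int i = 0; i + 1 < n; i += 2) {
        even += p[i];
        odd  += p[i + 1];
    }
    return even + odd;
}
```

With the input {1e8f, 1.0f, -1e8f, 1.0f}, the spacing between floats near 1e8 is 8, so 1e8f + 1.0f rounds back to 1e8f: sum_ltr returns 1.0f while sum_pairs returns 2.0f. This is exactly the kind of result change that -ffast-math licenses.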

Best regards,
Michael