[LLVM] (RFC) Addition/Support of new Vectorization Pragmas in LLVM

Hello all,

We are students from the Indian Institute of Technology (IIT), Hyderabad, and we would like to propose the addition of the following pragmas to LLVM that aid in (or possibly increase the scope of) vectorization in LLVM, in comparison with other compilers.

  1. ivdep

  2. Nontemporal

  3. [no]vecremainder

  4. [no]mask_readwrite

  5. [un]aligned

Could you please check the following Google document for the semantic description of these pragmas:

https://docs.google.com/document/d/1YjGnyzWFKJvqbpCsZicCUczzU8HlLHkmG9MssUw-R1A/edit?usp=sharing

It would be great if you could review the above document and advise us on how to proceed further (either about the semantics or about the relevant code sections in LLVM).

Thank you

Yashas, Happy, Sai Praharsh, and Bhavya

B.Tech 3rd year, IITH.

Hi,

First, as a high-level note, you posted a link to a Google doc, and at the end of the Google doc, you have a list of questions that you’d like answered. In the future, please put the questions directly in the email. For one thing, more people will read your email than will open your Google doc. Second, having the questions in the email should allow a better threading structure to the replies.

  • Ivdep: Is clang loop vectorize(assume_safety) equivalent to ivdep? To what extent do the semantics of ivdep need to be modified for Clang to create an equally “useful pragma”? To what extent would it be helpful to have this pragma in Clang?

  • Nontemporal: What kind of analysis can we do in LLVM to find where to use nontemporal accesses? Any help would be greatly appreciated.

  • vecremainder/novecremainder: Should the pragma simply call the vectorizer to attempt to vectorize the remainder loop, or should the vectorizer use a different method?

  • mask_readwrite/nomask_readwrite: Is it a good idea to implement a pragma that will generate masked intrinsics in the IR? What other architectures (besides x86) have support for masked reads/writes? (A conditional-store sketch illustrating where masking is needed follows this list of questions.)

Reference: https://llvm.org/devmtg/2015-04/slides/MaskedIntrinsics.pdf

LLVM has masked intrinsics for targets with AVX, AVX2, and AVX-512.

From the slides: "Most of the targets do not support masked instructions, optimization of instructions with masks is problematic, avoid introducing new masked instructions into LLVM IR"

  • aligned/unaligned: Is it worthwhile to have an LLVM-specific pragma rather than depending on OpenMP?
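
Regarding the mask_readwrite question above: a minimal sketch of the kind of loop where masked memory accesses matter (the function and values are hypothetical, for illustration only). The guarded load and store execute on only some iterations, so a vectorized version either needs masked loads/stores (LLVM's llvm.masked.load / llvm.masked.store intrinsics, lowered efficiently today mainly for AVX, AVX2, and AVX-512) or must fall back to scalarizing the conditional body.

    // Hypothetical example: the load of a[i] and the store to out[i] are
    // guarded by a per-iteration condition, so straightforward vectorization
    // needs per-lane masking.
    void conditional_update(float *out, const float *a,
                            const float *trigger, int n) {
      for (int i = 0; i < n; ++i)
        if (trigger[i] > 0.0f)
          out[i] = a[i] * 2.0f;  // executed only on lanes where the
                                 // condition holds -> masked load/store
    }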

-Hal

Hi,

First, as a high-level note, you posted a link to a Google doc, and at the end of the Google doc, you have a list of questions that you'd like answered. In the future, please put the questions directly in the email. For one thing, more people will read your email than will open your Google doc. Second, having the questions in the email should allow a better threading structure to the replies.

  * Ivdep: Is clang loop vectorize(assume_safety) equivalent to ivdep? To what extent do the semantics of ivdep need to be modified for Clang to create an equally “useful pragma”? To what extent would it be helpful to have this pragma in Clang?

There is a fundamental problem with the way that ivdep is defined by Intel's current documentation, at least for C/C++. As you note in your Google doc, it essentially says that the optimizer may ignore loop-carried dependencies except for those dependencies it can definitely prove are present. These are not semantics that any other compiler can actually replicate, and they are not equivalent to "vectorize(assume_safety)" (which asserts that no loop-carried dependencies are present). The good news is that, in conversations I've had with Intel, an openness to making these semantics more concrete has been expressed. I think it would be very useful to have ivdep in Clang, but only after we nail down the semantics with Intel in some useful way.
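
To make the distinction concrete, here is a minimal sketch (the function and loop are hypothetical, not taken from the proposal): vectorize(assume_safety) is a promise by the programmer about the code, whereas Intel's documented ivdep is a statement about what the optimizer's own analysis can prove.

    // Hypothetical example. The compiler cannot prove whether dst and src
    // overlap, so a loop-carried dependence is possible.
    //
    // vectorize(assume_safety) asserts that NO loop-carried dependence
    // exists; if dst and src do overlap, the programmer broke the promise.
    // Intel's documented ivdep instead says "ignore any dependence you
    // cannot definitely prove", which depends on one optimizer's analysis
    // and cannot be replicated exactly by other compilers.
    void scale_into(float *dst, const float *src, float a, int n) {
    #pragma clang loop vectorize(assume_safety)
      for (int i = 0; i < n; ++i)
        dst[i] = a * src[i];
    }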

To be fair, IVDEP most likely originated at Cray. [Or maybe Control Data. The history is fuzzy that far back. I do know it predates ANSI C.]

There’s a publicly available copy of the Cray C/C++ manual here:

https://pubs.cray.com/content/S-2179/9.0/cray-classic-c-and-c++-reference-manual/vectorization-directives

Scott Manley from Cray would be a good resource to tap for clarification on the semantics.

There is a fundamental problem with the way that ivdep is defined by Intel’s current documentation, at least for C/C++. As you note in your Google doc, it essentially says that the optimizer may ignore loop-carried dependencies except for those dependencies it can definitely prove are present. These are not semantics that any other compiler can actually replicate, and they are not equivalent to “vectorize(assume_safety)” (which asserts that no loop-carried dependencies are present). The good news is that, in conversations I’ve had with Intel, an openness to making these semantics more concrete has been expressed. I think it would be very useful to have ivdep in Clang, but only after we nail down the semantics with Intel in some useful way.

Agreed. I don’t see a lot of value in having the compiler override a pragma that is supposed to override the compiler :slight_smile: Cray’s IVDEP really means what the documentation says: Ignore Vector DEPendencies. It doesn’t remove all dependencies, just dependencies that inhibit vectorization. It also does not force vectorization. If it’s not possible or not profitable to vectorize, then it won’t vectorize.

I will add that ivdep is well used by Cray and its users, so I’d like to see it well defined in Clang/llvm.

Thanks, Scott.

Regarding this:

It doesn’t remove all dependencies, just dependencies that inhibit vectorization.

This matches what Cray’s manual says, but I’m also not sure how to interpret this statement. Does that mean that the dependencies ignored depend on the selected target? I’m a bit worried that the dependencies interesting for vectorization might change over time or depend on the hardware being targeted.

Can you please take a look at the way that Intel’s Fortran manual defines ivdep (https://software.intel.com/en-us/fortran-compiler-developer-guide-and-reference-ivdep) and say whether those semantics would also make sense for Cray’s implementation?

I believe our consensus view is that the semantics of these kinds of pragmas should be specified such that we could create a sanitizer which checks their dynamic semantic correctness independent of what the optimizer is actually capable of exploiting.

-Hal

This matches what Cray’s manual says, but I’m also not sure how to interpret this statement. Does that mean that the dependencies ignored depend on the selected target? I’m a bit worried that the dependencies interesting for vectorization might change over time or depend on the hardware being targeted.

No, we don’t consider the target with regards to ivdep – but I’ll admit I don’t know what hardware might do in the future :slight_smile:

Perhaps we could look at a classic vector dependency issue in what Cray calls a vector update (I believe Intel refers to it as a histogram) – a[idx[i]] = a[idx[i]] + b[i] as an example? Some targets can vectorize this, and thus it isn’t technically a dependency issue for those targets, but ivdep can still play a role here. Without ivdep, you can still safely vectorize this on Skylake but it requires a particular sequence of instructions to resolve properly. With ivdep, we can simply generate a gather/scatter. I imagine other vector dependency issues might benefit from a similar user-driven choice on hardware that could possibly “resolve” some of the dependency problems.
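
A sketch of that vector-update pattern (using the Intel spelling #pragma ivdep; Clang does not currently accept an ivdep pragma, so this is illustrative only):

    // Hypothetical example of the "vector update" / histogram pattern.
    // Iterations can collide when idx[i] repeats, so there is a possible
    // loop-carried dependence through a[]. With ivdep the compiler may emit
    // a plain gather/scatter; without it, a target such as Skylake can still
    // vectorize, but only with a more expensive conflict-detection sequence.
    void vector_update(float *a, const int *idx, const float *b, int n) {
    #pragma ivdep  /* Intel spelling; not currently accepted by Clang */
      for (int i = 0; i < n; ++i)
        a[idx[i]] = a[idx[i]] + b[i];
    }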

Can you please take a look at the way that Intel’s Fortran manual defines ivdep (https://software.intel.com/en-us/fortran-compiler-developer-guide-and-reference-ivdep) and say whether those semantics would also make sense for Cray’s implementation?

Their semantics certainly cover at least part of Cray’s ivdep. I did try a few examples that vectorize with Cray’s ivdep using icc and wasn’t sure if some of their decisions were due to or in spite of ivdep, so I need to dig into that more. We’ll put together a list of what we do with IVDEP and see if they are all covered under that wording.

Cheers,

Scott

We’ll put together a list of what we do with IVDEP and see if they are all covered under that wording.

Thanks, that will be helpful.

-Hal

HAPPY Mahto via llvm-dev <llvm-dev@lists.llvm.org> writes:

2. Nontemporal

Is this a hint or a command? If it's a command then this would implicitly specify the data is aligned on some targets (e.g. Intel x86). I'm not sure we want to make that implicit assumption as it is very easy for the programmer to get this wrong.

-David
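
To make the implicit-alignment concern concrete, here is a hedged sketch using the x86 streaming-store intrinsics (a hypothetical helper, not something from the proposal): the non-temporal store generated by _mm256_stream_ps requires a 32-byte-aligned destination, so a "command" form of the pragma would silently carry that alignment requirement on this target.

    #include <immintrin.h>

    // Hypothetical helper: copy with non-temporal (streaming) stores.
    // _mm256_stream_ps lowers to (V)MOVNTPS, which faults unless dst + i is
    // 32-byte aligned -- the implicit assumption mentioned above.
    void stream_copy(float *dst, const float *src, int n) {
      int i = 0;
      for (; i + 8 <= n; i += 8)
        _mm256_stream_ps(dst + i, _mm256_loadu_ps(src + i));
      for (; i < n; ++i)  // scalar tail
        dst[i] = src[i];
      _mm_sfence();       // order the streaming stores before later accesses
    }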

There is a fundamental problem with the way that ivdep is defined by Intel’s current documentation, at least for C/C++. As you note in your Google doc, it essentially says that the optimizer may ignore loop-carried dependencies except for those dependencies it can definitely prove are present. These are not semantics that any other compiler can actually replicate, and they are not equivalent to “vectorize(assume_safety)” (which asserts that no loop-carried dependencies are present). The good news is that, in conversations I’ve had with Intel, an openness to making these semantics more concrete has been expressed. I think it would be very useful to have ivdep in Clang, but only after we nail down the semantics with Intel in some useful way.

Agreed. I don’t see a lot of value in having the compiler override a pragma that is supposed to override the compiler :slight_smile: Cray’s IVDEP really means what the documentation says: Ignore Vector DEPendencies. It doesn’t remove all dependencies, just dependencies that inhibit vectorization. It also does not force vectorization. If it’s not possible or not profitable to vectorize, then it won’t vectorize.

+1

This one is particularly useful because some compilers implement “omp simd” as “ignore the cost model and vectorize unconditionally”, so it is really useful in C/C++ code to be able to provide a weaker statement to the compiler. I disagree with the strong interpretation of the OpenMP standard but am not willing to quit my job over it :wink:

I will add that ivdep is well used by Cray and its users, so I’d like to see it well defined in Clang/llvm.

51K references on GitHub (https://github.com/search?q=pragma+ivdep&type=Code) suggest it is widely used beyond the Cray compiler.

Jeff

I think it has to be a hint. If it is a command, what is its meaning on non-x86 processors where write-through and write-back are controlled in different ways (or are just uncontrollable)?

For example, some PPC processors set cache write-back/write-through behavior at the page level (https://www.nxp.com/docs/en/data-sheet/MPC603.pdf). Would the command implementation have to try to set the page properties to do as the user directed?

There are also cases where the compiler may know that the user is often wrong about the utility of non-temporal memory access and that ignoring it is an effective optimization. This is potentially relevant to profile-guided optimization.

Jeff

vecremainder/novecremainder: Should the pragma simply call the vectorizer to attempt to vectorize the remainder loop, or should the vectorizer use a different method?

Something like that. There were patches posted at some point to enable tail-loop vectorization. At this point, I imagine that you’d construct a VPlan with the vectorized tail.

Yep, committed in https://reviews.llvm.org/rL366989 and https://reviews.llvm.org/D65197.

The pragma name is different, but I think it tries to achieve the same thing.

If I understand Intel's documentation correctly, these are different things:

  • vectorize.predicate.enable: do not create an epilogue loop (use masked instructions in the main loop instead)

  • vecremainder: if there is an epilogue loop, vectorize it as well (which will require masked instructions in the epilogue, but not in the main loop)

Michael

Ah yes, not exactly the same things, thanks for clarifying.
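
For reference, a hedged sketch of the distinction Michael describes, using the Clang spelling vectorize_predicate(enable) exposed by the patches linked above (the loop itself is just an illustration):

    // Hypothetical illustration. vectorize_predicate(enable) asks the
    // vectorizer to fold the tail into the main vector loop with masked
    // instructions, so no scalar epilogue remains. Intel's vecremainder
    // instead keeps the epilogue loop and vectorizes it separately, masking
    // only the epilogue.
    void saxpy(float *restrict y, const float *restrict x, float a, int n) {
    #pragma clang loop vectorize(enable) vectorize_predicate(enable)
      for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
    }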