Proposal: Introduce memory comparison intrinsics

Hello everyone.

I would like to introduce new intrinsics for memory comparison:

memcmp - an equivalent of libc's memcmp,
bcmp - an equivalent of libc's bcmp,
memcmp.element.unordered.atomic - an element-atomic version of memcmp,
bcmp.element.unordered.atomic - an element-atomic version of bcmp.

Currently there exist some optimizations for memcmp/bcmp libc calls.
We would like these optimizations to apply to element-atomic comparisons as well (where atomicity permits).

I suggest we rewrite the existing optimizations to work on the new intrinsics and transform
memcmp/bcmp libc calls into the corresponding intrinsics. This is similar to what we do with
memcpy library calls.

Having these optimizations work on intrinsics rather than on recognized libc calls
will allow us to share existing transforms between the atomic and non-atomic variants.

I propose the following plan for introducing the new intrinsics:

  1. Introduce non-atomic memcmp and bcmp intrinsics.
  2. Reimplement the existing transforms for the non-atomic memcmp intrinsic, the same way as it's done for memcpy.
  3. Introduce the atomic intrinsics and reuse the optimizations.

Please share any concerns or feedback.

Dmitry

Could you elaborate on the specific signatures these intrinsics would have?

llvm.memcpy and friends exist because we want to capture additional semantics beyond what the memcpy signature provides - notably alignment information. What additional information are you planning to capture for these?

-Chris

I propose they have signatures similar to non-atomic/atomic llvm.memcpy:
llvm_i32_ty llvm.memcmp (llvm_anyptr_ty %lhs, llvm_anyptr_ty %rhs, llvm_anyint_ty %length, llvm_i1_ty %is_volatile),
llvm_i32_ty llvm.bcmp (llvm_anyptr_ty %lhs, llvm_anyptr_ty %rhs, llvm_anyint_ty %length, llvm_i1_ty %is_volatile),
llvm_i32_ty llvm.memcmp.element.unordered.atomic (llvm_anyptr_ty %lhs, llvm_anyptr_ty %rhs, llvm_anyint_ty %length, llvm_i32_ty %element_size),
llvm_i32_ty llvm.bcmp.element.unordered.atomic (llvm_anyptr_ty %lhs, llvm_anyptr_ty %rhs, llvm_anyint_ty %length, llvm_i32_ty %element_size).

With these signatures, it is easy to transform the libc calls into the new intrinsics.

Our main motivation for having these intrinsics is that we want atomic memory comparison semantics, which we cannot express through a libc call.
And if we are to add an atomic memcmp intrinsic, it would be natural to transform libc calls to intrinsics and share common logic across all these functions.
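
For concreteness, here is a minimal C11 sketch of what the element-atomic semantics could mean (the function name and the fixed 4-byte element size are hypothetical; this models the bcmp-style equality variant, since the ordered memcmp variant would additionally need a byte-order-aware comparison of the first mismatching element):

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical reference semantics for a 4-byte element-atomic bcmp:
 * each element is read with a single atomic load, but no ordering is
 * imposed between elements ("unordered"). Returns 0 iff the ranges
 * are equal; bcmp only guarantees nonzero otherwise. */
static int atomic_bcmp4(_Atomic uint32_t *lhs, _Atomic uint32_t *rhs,
                        size_t n_elems) {
    for (size_t i = 0; i < n_elems; ++i) {
        uint32_t a = atomic_load_explicit(&lhs[i], memory_order_relaxed);
        uint32_t b = atomic_load_explicit(&rhs[i], memory_order_relaxed);
        if (a != b)
            return 1; /* unequal */
    }
    return 0;
}
```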

Dmitry

I’m personally in favor of having consistent intrinsics for all libc memory operations.

Beware that C doesn't guarantee a 4-byte int (it could be 2 bytes), and the same goes for the count parameter (size_t could be 4 or 8 bytes), so the signature is target dependent.

Glancing at the in-tree usage, it looks like we have decent support for optimizing and lowering existing calls to bcmp/memcmp, but very little in the way of pattern-matching formation. Are you planning on extending the matching pieces? Or is the primary intent to be able to share lowering code for the atomic variants?

One thing I note is that, glancing at existing code, it looks like not all targets support bcmp or memcmp. Given that, any intrinsic formation is going to have to remain dependent on the appropriate TLI checks. That's slightly odd, but not a show stopper.

I would find this proposal more compelling if you could show a benefit to the existing lowering/transformations from introducing the non-standard signatures. I don't see any obvious ways to do so, but maybe give that some thought?

The major alternative to this proposal would be to simply add two new libfuncs for the atomic variants of bcmp/memcmp, and then configure them as not present on most targets. This would allow you to reuse the lowering code - which I do think is entirely reasonable for upstream - without the need for the intrinsics.

Overall, I think this proposal is reasonable. I’m not strongly in support given the ease of the libfunc approach, but I don’t really see any serious downsides to it either.

Philip

Is there a reason to have both bcmp and memcmp forms of this? The only difference appears to be the nature of the result value; would that be better modeled as an immarg parameter?
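
The difference in question can be shown in a few lines of C (the helper name is illustrative, not a proposed API): memcmp returns an ordered result, while bcmp only promises zero versus nonzero, so one could be modeled on top of the other by collapsing the ordering - which is why a single intrinsic plus a flag might suffice.

```c
#include <string.h>

/* memcmp yields <0, 0, or >0 in lexicographic byte order; bcmp only
 * guarantees 0 iff the ranges are equal. Collapsing memcmp's result
 * into a boolean therefore models bcmp exactly. */
static int bcmp_like(const void *a, const void *b, size_t n) {
    return memcmp(a, b, n) != 0;
}
```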

-Chris

Our main motivation is to have an atomic variant of memcmp. We discussed extending the pattern matching, and we may do it later, but it is not at the top of our list.

Regarding the non-standard signatures: I think it would be worthwhile to have an 'isVolatile' flag for the non-atomic memcmp (as for memcpy). We cannot pass this flag to a lib function call, so the suggested approach of introducing new libfuncs would not work.
Also, introducing the memcmp intrinsic would make the code more consistent and easier to understand. memcpy, memset, and memmove already have intrinsics, and we could share some intrinsic optimizations. E.g., we could hoist memcmp and memcpy of invariant arrays out of a loop using generalized code for memory intrinsics.
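
A minimal sketch of that hoisting opportunity at the source level (function names are illustrative only): when the compared buffers and the length are loop invariant, the comparison can be computed once before the loop.

```c
#include <string.h>

/* Before: the invariant memcmp call is re-executed on every trip. */
static int count_matches_naive(const char *a, const char *b,
                               size_t n, int iters) {
    int count = 0;
    for (int i = 0; i < iters; ++i)
        if (memcmp(a, b, n) == 0)
            ++count;
    return count;
}

/* After: the comparison is hoisted out of the loop. */
static int count_matches_hoisted(const char *a, const char *b,
                                 size_t n, int iters) {
    int eq = (memcmp(a, b, n) == 0);
    int count = 0;
    for (int i = 0; i < iters; ++i)
        if (eq)
            ++count;
    return count;
}
```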

Dmitry