Hi,
The ldN-like intrinsics (all of ld1xN, ldN, ldNlane, ldNr, stN, stNlane) currently accept any pointer type: their definitions in IntrinsicsAArch64.td use 'LLVMAnyPointerType', which means any pointer type can be passed to these intrinsics.
E.g. I tried the following case, ld2.ll:
define { <4 x i32>, <4 x i32> } @test(float* %ptr) {
%vld2 = call { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld2.v4i32.p0f32(float* %ptr)
ret { <4 x i32>, <4 x i32> } %vld2
}
declare { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld2.v4i32.p0f32(float*)
It passes the verifier and generates an ld2 instruction with "llc -march=aarch64 < ld2.ll".
I find it strange that the pointer type has no relationship to the returned type. Currently the IR regression tests use various pointer types for the same operation, e.g. 'xx.ld2.v4i32.p0i32', 'xx.ld2.v4i32.p0v4i32', and 'xx.ld2.v4i32.p0i8', which is confusing. Should we modify the definitions of these intrinsics and restrict the pointer type?
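For concreteness, all of the following declarations are accepted today for the same result type (the pointee types are the ones appearing in the existing regression tests):

```llvm
declare { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld2.v4i32.p0i32(i32*)
declare { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld2.v4i32.p0v4i32(<4 x i32>*)
declare { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld2.v4i32.p0i8(i8*)
```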
If you agree, I suggest using a pointer to the vector element type, since 'arm_neon.h' declares the ld2 intrinsic as 'int32x2x2_t vld2_s32(int32_t const * ptr)', which also takes a pointer to the element type. This is easy to achieve: I have a patch that adds a constraint 'PointerToVectorElt' to 'Intrinsics.td'. I just wonder whether such a modification is reasonable.
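Roughly, the change would look like the sketch below (the exact class names in IntrinsicsAArch64.td may differ, and 'PointerToVectorElt' is the constraint proposed by my patch, not an existing TableGen class):

```tablegen
// Today: the pointer argument is unconstrained.
class AdvSIMD_2Vec_Load_Intrinsic
    : Intrinsic<[llvm_anyvector_ty, LLVMMatchType<0>],
                [LLVMAnyPointerType<LLVMMatchType<0>>],
                [IntrReadMem]>;

// Proposed: tie the pointer to the element type of the overloaded
// vector result, matching the arm_neon.h prototypes.
class AdvSIMD_2Vec_Load_Intrinsic
    : Intrinsic<[llvm_anyvector_ty, LLVMMatchType<0>],
                [PointerToVectorElt<0>],  // proposed constraint
                [IntrReadMem]>;
```

With that in place, only 'xx.ld2.v4i32.p0i32' would remain valid for a <4 x i32> result, and the p0i8/p0v4i32 variants in the tests would be rejected by the verifier.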
Thanks,
-Hao