What sort of IR construct should one use to implement filling each
element in an array (or vector) with the same value? In C++, this
might arise in "std::fill" or "std::fill_n", when the element values in the
vector are identical.
In the D language, one can fill an array or a slice of an array
by an assignment, e.g.
"A[2..10] = 42;"
1. What I would prefer is an explicit intrinsic, call it "llvm.fill.*", that
would work similarly to the "llvm.memset.*" intrinsic. The memset intrinsic
only works with byte patterns, but provides wonderful optimizations in the
various code generators. Hopefully, similar optimizations would be
implemented for "llvm.fill.*".
2. Given that I probably won't get my wish, I note that some front-ends use
vector assignment:
store <8 x i16> <i16 42, i16 42, i16 42, i16 42, i16 42, i16 42, i16 42, i16 42>, <8 x i16>* %14, align 2
Does this work well for architectures without SIMD?
What chunk size should be used for the vector, and is that architecture
dependent?
3. If vectors are not used, but rather an explicit loop of stores,
element-by-element, will this be recognized as an idiom for
architecture-dependent optimizations?
A plain store loop obviously isn't great for multi-byte patterns, but LLVM can recognize loops that fit memset_pattern on targets that support it, which I think is closer to what you'd like.
I don't want to do a memcpy of the 20-element constant array, and memset
doesn't work here. I want an intrinsic that copies the scalar int constant 42
to each element of the int array.
Like a list initializer, but will do partials/slices?
More than just an initializer, and yes, it must do partials/slices. More like
std::fill in C++.
Are you just looking to create an intrinsic that will generate a jump to a lib
routine?
If all else fails, then call a library routine. But if the number of elements
is "small" and suitably aligned, then perhaps several elements can be filled by
a simple load of a constant and a store.
Back in the day, we called this a BLT (block transfer, pronouced ‘blit’) for the PDP-10 instruction of that name.
On x86, there is REP MOVS.
You can do this by assigning the first value, then doing an overlapping memcpy to fill in the rest.
That is undefined behavior, and it really doesn't work with most modern memcpy() implementations (the most common approach these days is to simply implement memmove() semantics for memcpy, which will very much not do what you want).
While I am old enough to have programmed on the PDP-10, I didn't remember BLT.
However, none of this answers my original questions. Something that produces
REP MOVS might be a step in the right direction. But what of other architectures?
Take a look at memset (byte patterns) and memset_patternX (multi-byte patterns, currently supported only for select targets). In general, support for fill idioms is something we could stand to improve, and it's something I or someone on my team is likely to be working on within the next year.
Today, the naive store loop is probably your best choice to have emitted by the frontend. This loop will be nicely vectorized by the loop vectorizer, specialized if the loop length is known to be small, and otherwise decently handled. The only serious problem with this implementation strategy is that you end up with many copies of the fill loop scattered throughout your code (code bloat). (I'm assuming this gets aggressively inlined. If it doesn't, well, then there are bigger problems.)
Moving forward, any further support we added would definitely handle pattern matching the naive loop constructs. Given that, it's also reasonably future proof as well.
I already use memset for byte patterns, but was unaware of memset_patternX; I
will look into it. Based on your observations, I guess I will go ahead with
the naive store loop approach for now and hope for "llvm.fill.*" in the future.