It appears that the stats you listed are for movaps [SSE], not vmovaps [AVX]. I would assume that vmovaps(m128) is closer to vmovaps(m256), since they are both AVX instructions. Although, yes, I agree that this is not clear from Agner’s report. Please correct me if I am misunderstanding.
You are misunderstanding [no worries, happens to everyone = )]. The timings I listed were for vmovaps of the form,
vmovaps %xmm0, (mem)
i.e., its form as a 128-bit AVX instruction. Let me explain. There are three categories of instructions we are discussing:
1. Normal SSE instructions.
2. 128-bit AVX instructions, which are the same SSE instructions except encoded using the VEX prefix (and thus non-destructive*). I will always refer to these as the 128-bit AVX instructions, never as SSE instructions.
3. 256-bit AVX instructions, the "true" AVX instructions. (Not that the 128-bit AVX instructions aren't AVX instructions if you define AVX by the presence of a VEX prefix, but in most programmers' minds AVX is associated with 256-bit operations.)
First, note that 1 and 2 are exactly the same performance-wise in isolation. The difference between them shows up when mixing with 3: executing a 256-bit AVX instruction causes a "dirty state" to be entered**. After that occurs, every SSE instruction used will cause the processor to save/restore the upper 128 bits of the ymm register aliased onto the output xmm register of the SSE instruction, resulting in bad performance. On the other hand, if you use the 128-bit AVX form of the SSE instructions, you are signaling to the processor that you do not care about the upper 128 bits of the aliased ymm register, so it can simply zero the top bits, and the bad performance is avoided. That is the whole point of the 128-bit AVX form of the SSE instructions: to let you mix SSE-style operations with 256-bit AVX code without paying said penalty. Calling vzeroupper restores the ymm registers to a clean state, allowing you to use the normal SSE instructions again without slowdown.
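The mixing rules can be sketched as follows (AT&T syntax; `(mem)` is a placeholder memory operand, as in the example earlier):

    vaddps  %ymm1, %ymm2, %ymm0    # 256-bit AVX instruction: the dirty state is entered
    vmovaps %xmm3, (mem)           # 128-bit AVX form: no penalty, upper bits of ymm3 just zeroed
    vzeroupper                     # restore the clean state
    movaps  %xmm3, (mem)           # normal SSE is now penalty-free again

Had the movaps been issued before the vzeroupper, it would have triggered the upper-half save/restore described above.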
Additionally, note that the 128-bit AVX instructions do not cause the "dirty state" to be entered, allowing you to mix and match them with normal SSE while taking advantage of the lack of implicit arguments and the nice non-destructive encoding if you choose to (in case you can't tell, I like the non-destructive encoding a lot).
* NOTE: This is important since SSE instructions with implicit operands (e.g. blendvps, which implicitly uses xmm0 as its mask) have that operand made explicit when instantiated as a 128-bit AVX instruction (vblendvps).
** NOTE: The dirty state is not synonymous with the upper bits of all of the ymm registers being non-zero; the state is tracked separately from the actual register contents, so the upper bits can all be zero while the processor is still in the dirty state. See the Intel AVX optimization guide.
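To make the footnotes concrete, here is the destructive SSE form next to the non-destructive 128-bit AVX form, including the implicit-operand case (AT&T syntax; register choices are arbitrary):

    # SSE: two-operand, destructive -- the destination is also a source
    addps    %xmm1, %xmm0                 # xmm0 = xmm0 + xmm1 (old xmm0 is lost)
    blendvps %xmm1, %xmm2                 # blend into xmm2; the mask is the implicit xmm0

    # 128-bit AVX: three/four-operand, non-destructive -- all sources preserved
    vaddps   %xmm1, %xmm2, %xmm0          # xmm0 = xmm2 + xmm1
    vblendvps %xmm3, %xmm1, %xmm2, %xmm0  # blend xmm2/xmm1 into xmm0; mask xmm3 is explicit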
As I am sure you are aware, we cannot use SSE (movaps) instructions in an AVX context, or else we'll pay the context-switch penalty. It might be too big an assumption that movaps and vmovaps have the same timings. Same for moved.
See above.
Also, I’m sure you are aware that the Sandybridge optimization guide suggests that unaligned stores be split into a 128b store and 128b extract. This does argue against my above assumption.
This is true. The reason they suggest that is to avoid storing across a page boundary, which causes obscene slowdowns. As an aside, if you are doing any vector coding, you should always align the stores and use unaligned loads.
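Following that advice, a typical loop body looks like this (AT&T syntax; `(src)` and `(dst)` are placeholder addresses, with only dst required to be 32-byte aligned):

    vmovups (src), %ymm0    # unaligned load: cheap, and safe even if src straddles a boundary
                            # ... compute on %ymm0 ...
    vmovaps %ymm0, (dst)    # aligned store: can never split across a page boundary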
For full disclosure, I have not timed the individual instructions; just kernels. So, my performance gains may be coming from another source related to this change. Most likely, my gains are from better use of cache, since we would not be moving unneeded bytes around. In the context of shared cache, this savings may be enough to keep the other cores more busy. Not to mention the stack space saved. But, I cannot say for sure right now.
I have actually timed said instructions in the past and reproduced Agner Fog's results. I just prefer to speak by referring to facts that cannot be misconstrued as hearsay = ).
But if you don't believe me, time the instructions yourself (it's an important thing to have in your toolbox anyway, since sometimes Intel's documentation can be non-specific). I have a small instruction timing project lying around somewhere; if you want it I can send it to you privately.
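For reference, the core of such a harness is just a fenced rdtsc pair around a repeat loop. A rough sketch (AT&T syntax; `buf` is a hypothetical 16-byte-aligned buffer, and a real harness would also warm the caches, pin the thread to one core, and subtract the empty-loop overhead):

    lfence
    rdtsc                    # start: time-stamp counter in edx:eax
    mov     %eax, %esi       # save the low 32 bits of the start count
    mov     $1000, %ecx
1:  vmovaps %xmm0, (buf)     # instruction under test, repeated 1000 times
    dec     %ecx
    jnz     1b
    lfence
    rdtsc                    # stop
    sub     %esi, %eax       # eax ~= cycles for 1000 iterations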
Michael