What are the differences between `memref<4x4xf32>` and `memref<4xvector<4xf32>>`?

I'm trying to educate myself on vectorization-related issues.
I'm sure I could gather information on this just by generating some code and looking at it, but I'm sure I would not get the intent behind the generated code, which is why I'm asking here.

The question is in the title: what are the differences between `memref<4x4xf32>` and `memref<4xvector<4xf32>>`? Are there alignment issues? Are the types meant to be convertible into one another using `memref.cast` or some other kind of cast?
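For concreteness, here is a minimal sketch of how I imagine the two types being used (the function names are made up, and I'm not sure this reflects the intended idiom):

```mlir
// Hypothetical example: with the scalar element type, one load yields a single f32.
func.func @scalar_elements(%m: memref<4x4xf32>, %i: index, %j: index) -> f32 {
  %e = memref.load %m[%i, %j] : memref<4x4xf32>
  return %e : f32
}

// With the vector element type, one load yields a whole vector<4xf32> "row".
func.func @vector_elements(%m: memref<4xvector<4xf32>>, %i: index) -> vector<4xf32> {
  %v = memref.load %m[%i] : memref<4xvector<4xf32>>
  return %v : vector<4xf32>
}
```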
D.

The vector dialect doc, and in particular its deeper dive section, describes this.

TL;DR it is architecture and DataLayout-dependent.

If you are referring to that description, then I'm afraid I can't find an answer to my questions there. The relation with LLVM is discussed in some depth, but not the relation with non-vector MLIR data structures.

In a certain sense, this matches my experience when trying to interface your nice example code (which works on vectors) with an application that has to convert data to feed it, or to get results out of it.
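To make that concrete, the kind of bridging I have in mind looks something like the following (just a sketch; I'm assuming `vector.transfer_read`/`vector.transfer_write` are the right ops for this):

```mlir
// Hypothetical sketch: bridging a plain scalar memref to vector code and back.
func.func @bridge(%src: memref<4x4xf32>, %dst: memref<4x4xf32>, %i: index) {
  %c0 = arith.constant 0 : index
  %pad = arith.constant 0.0 : f32
  // Read row %i of the scalar memref as a vector<4xf32>.
  %row = vector.transfer_read %src[%i, %c0], %pad : memref<4x4xf32>, vector<4xf32>
  // ... vector computation would go here ...
  // Write the vector back into another scalar memref.
  vector.transfer_write %row, %dst[%i, %c0] : vector<4xf32>, memref<4x4xf32>
  return
}
```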

My apologies, it is indeed not a direct answer to your Q; the doc could be refreshed with some of the following.

Here is a more targeted discussion on `memref<...xT>`.

In the particular case of x86 and `T = vector<4xf32>`:
`sizeof(T) == 16B`, `align(T) == 16B`, `sizeof(f32) == 4B`, `align(f32) == 4B`.

By default on x86, the alignment of a vector type is rounded up to the next power of 2, so for `vector<6xf32>` (24B of data) it will be 32B, and LLVM introduces padding to match. Bitcasting in such cases has to be done in a very specific way to avoid manipulating the padding data (i.e. garbage).
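As a sketch (not a definitive recipe), a bitcast between vector types of equal bit width only reinterprets real data, whereas a padded type like `vector<6xf32>` cannot simply be reinterpreted through its raw storage:

```mlir
// Hypothetical sketch: 4 x f32 and 2 x f64 are both 128 bits, so the bitcast
// touches only real data. For a padded type such as vector<6xf32> (24B of data
// in 32B of storage on x86), reinterpreting the storage would also pick up the
// padding bytes.
func.func @reinterpret(%v: vector<4xf32>) -> vector<2xf64> {
  %w = vector.bitcast %v : vector<4xf32> to vector<2xf64>
  return %w : vector<2xf64>
}
```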

This discussion on individual bits and addressing from memory is also relevant and may interest you.


Thanks, that’s what I call food for thought. I’ll certainly come back for more after I digest it.