Hi all,
Tomorrow (Thursday, 9am California time, 16:00 UTC), Alexander Heinecke (Intel) will present "Tensor Processing Primitives: A Programming Abstraction for Efficiency and Portability in Deep Learning Workloads". Here is an excerpt from the paper:
TPPs define a compact, yet versatile set of 2D-tensor operators (or a virtual Tensor ISA), which subsequently can be utilized as building-blocks to construct complex operators on high-dimensional tensors. The TPP specification is platform-agnostic, thus code expressed via TPPs is portable, whereas the TPP implementation is highly-optimized and platform-specific.
[…] TPPs fit in the MLIR ecosystem/stack as a lowering dialect, and in this way the TPP back-end could be leveraged by multiple TC frameworks.
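To give a rough flavor of the idea ahead of the talk (my own illustration, not taken from the paper): the point of the abstraction is that a higher-dimensional operator can be written as loops around small 2D building blocks, and only those blocks need platform-specific optimization. Below is a minimal sketch in C; the name `tpp_gemm_2d` is hypothetical, and a real TPP implementation would dispatch a JIT-generated, platform-specific kernel rather than this naive reference loop.

```c
/* Minimal sketch (illustration only, not the paper's API): a higher-dimensional
 * operator (batched matrix multiply) expressed as a loop around a single
 * 2D "TPP-like" building block. */
#include <stddef.h>

/* Hypothetical 2D primitive: C[MxN] += A[MxK] * B[KxN], row-major.
 * In a real TPP back-end this would be a highly optimized, platform-specific kernel. */
static void tpp_gemm_2d(const float *A, const float *B, float *C,
                        size_t M, size_t N, size_t K) {
    for (size_t m = 0; m < M; ++m)
        for (size_t n = 0; n < N; ++n)
            for (size_t k = 0; k < K; ++k)
                C[m * N + n] += A[m * K + k] * B[k * N + n];
}

/* Higher-dimensional operator composed from the 2D block:
 * batched GEMM over tensors A[BxMxK], B[BxKxN], C[BxMxN]. */
static void batched_gemm(const float *A, const float *B, float *C,
                         size_t batch, size_t M, size_t N, size_t K) {
    for (size_t b = 0; b < batch; ++b)
        tpp_gemm_2d(A + b * M * K, B + b * K * N, C + b * M * N, M, N, K);
}
```

The portable part is the loop nest and the primitive's semantics; the platform-specific part stays hidden behind the 2D primitive, which is what makes a lowering dialect in MLIR a natural fit.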
As usual, here is the information to join the meeting:
https://meet.google.com/aue-vgas-egu
+1 218-301-8485 PIN: 255 745#
I’ll also update this thread with the slides and recording after the meeting.