This Thursday -> LLVM Compiler Social, Tech-Talk: Deep Learning is the Killer App for Polyhedral Compilation

Dear LLVM and compiler-interested people,

This Thursday, 19:00, Dr. Oleksandr Zinenko will present Tensor Comprehensions, a deep-learning compiler jointly developed by Facebook, ENS Paris, and ETH Zurich, at the LLVM compiler social.

Tech-Talk: Deep Learning is the Killer App for Polyhedral Compilation

Deep learning approaches are successfully applied to an increasingly large number of real-world problems, such as computer vision, speech or weather prediction. Their efficiency comes at a high computational and energy price. While the deep learning kernels are qualified as “embarrassingly parallel”, their implementations often fall short of exploiting the full power of modern parallel hardware. We propose Tensor Comprehensions, a new domain-specific language for deep learning workloads and a compilation flow combining more (LLVM) or less (polyhedral, in-process autotuning) conventional techniques. The language is restricted by design to the computations that can be effectively analyzed in the polyhedral model, yet it covers the vast majority of DL layers. Polyhedral transformation happens at a higher level than in existing tools, directly capturing the relevant information that otherwise would have to be extracted from a lower-level intermediate representation. Currently, Tensor Comprehensions targets NVidia GPUs through NVRTC or LLVM’s PTX backend, and modern CPUs with vectorization and optional parallelization through the Tapir extension of the LLVM IR. Initial evaluations show up to 4x speedups over vendor-provided libraries on computational kernels relevant to deep learning.
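To give a flavor of the language (a sketch based on the publicly documented Tensor Comprehensions examples; the exact surface syntax may differ between releases), a matrix multiplication is written as a single comprehension. The `+=!` operator denotes a reduction whose accumulator is initialized to zero before accumulating:

```
def matmul(float(M, K) A, float(K, N) B) -> (C) {
    C(m, n) +=! A(m, k) * B(k, n)
}
```

Loop nests over m, n, and the reduction index k, as well as the size of the output tensor C, are inferred from the index expressions, which is what keeps the language within the fragment that the polyhedral model can analyze.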

Oleksandr Zinenko is a research engineer at Inria and École Normale Supérieure in Paris, France, working on the usability and scalability of polyhedral compilation. He obtained his PhD from Université Paris-Saclay for his work on interactive program restructuring. Oleksandr’s research interests include polyhedral compilation, program optimization, and programming languages, all from a developer-centric viewpoint. He is a co-author of an interactive toolset to promote and disseminate polyhedral compilation techniques.

# Registration

# What

A social meetup to discuss compilation and code generation questions with a focus on LLVM, clang, Polly and related projects.

Our primary focus is to provide a venue (and drinks & snacks) that enables free discussions between interested people without imposing an agenda/program. This is a great opportunity to informally discuss your own projects, get project ideas or just learn about what people at ETH and around Zurich are doing with LLVM.

Related technical presentations held by participants are welcome (please
contact us).

# Who: Anybody interested

  - ETH students and staff
  - LLVM developers and enthusiasts external to ETH

# When: 09.08.2018, 19:00

# Where: CAB E 72

# What is LLVM?

LLVM (http://www.llvm.org) is an open-source project that provides a collection of modular compiler and toolchain technologies. It is centered around a modern SSA-based compiler, around which an entire ecosystem of compiler technology has been developed. Best known is the clang C/C++ compiler, which is, for example, used to build iOS. Beyond this, a diverse set of projects is developed under the umbrella of LLVM. These include code generators and assemblers for various interesting architectures, a JIT compiler, a debugger, run-time libraries (the C++ standard library, OpenMP, an OpenCL library), program sanitizers, and many more.

LLVM itself grew out of a research project more than ten years ago and is the basis of many exciting research projects today:

https://scholar.google.ch/scholar?cites=7792455789532680075&as_sdt=2005&sciodt=0,5&hl=de

Best,
Tobias
