We have a few openings at the moment, so I figured I’d advertise a bit!
At Google, we are building a generic, retargetable compiler framework that will allow a large range of machine learning applications to benefit from diverse hardware acceleration.
Google is an “AI first” company, and we are making major investments to build infrastructure for existing accelerators such as TPUs and GPUs as well as the coming wave of “edge” accelerators for mobile, automotive, AIY, and other low-power applications.
We are looking for talented engineers to help us build the next-generation compiler for ML frameworks (TensorFlow, JAX, S4TF, …). This effort is driven by the increasing generality of accelerator hardware and programming models, as well as the need to enable rapid bring-up of new devices - sometimes with wildly different capabilities and target markets.
While our primary customers are in the ML domain, most techniques generalize to other domains and share similarities with HPC and distributed systems. To support our goal, we created the MLIR framework and continue to develop it within LLVM, while leveraging it to build products and deliver solutions to our internal users.
== Job responsibilities ==
In this role you will contribute components to the compiler framework, including the open-source MLIR implementation: infrastructure, compiler analyses, and transformations. You will also be involved in projects using MLIR, such as TensorFlow graph optimizations, the TFLite graph converter, and code generation for accelerators using LLVM. We have a range of opportunities to match candidates with different levels of experience.
We are looking for software engineers with a background and strong interest in compiler and systems work. Some familiarity with machine learning is a plus but not required. Experience with LLVM is highly valued, as is experience with other compiler frameworks.
Contact us: firstname.lastname@example.org