Deep Learning Compiler and Optimizer

Project Overview

This project aims to build deep learning compiler and optimizer infrastructure that automatically improves scalability and efficiency for both distributed and local execution. The stack covers two broad classes of optimization: fast distributed training over large-scale server clusters and efficient local execution on diverse hardware devices. Our current optimizations span many parts of the system stack, including fast distributed training over RDMA, automatic computation placement across devices, automatic operator batching and kernel fusion, a tensor algebra compiler, and sparsity and quantization optimizations.
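To illustrate one of the techniques named above, the sketch below shows the idea behind kernel fusion in plain NumPy. This is a conceptual example, not code from the project: fusing adjacent elementwise operators avoids materializing intermediate tensors, so each element is read and written once instead of once per operator.

```python
import numpy as np

def unfused(x, w, b):
    # Three separate "kernels": each step materializes a full
    # intermediate array before the next one runs.
    y = x * w                   # kernel 1: elementwise multiply
    y = y + b                   # kernel 2: elementwise add
    return np.maximum(y, 0.0)   # kernel 3: ReLU

def fused(x, w, b):
    # One fused kernel: a single pass computes the same result,
    # touching each element once and allocating only the output.
    out = np.empty_like(x)
    for i in range(x.size):
        v = x.flat[i] * w.flat[i] + b.flat[i]
        out.flat[i] = v if v > 0.0 else 0.0
    return out

x = np.random.randn(8)
w = np.random.randn(8)
b = np.random.randn(8)
assert np.allclose(unfused(x, w, b), fused(x, w, b))
```

A real compiler performs this transformation on the operator graph and emits a single device kernel, but the memory-traffic saving it targets is the same as in this toy version.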

Open-source Release

Several of our projects have been open-sourced; you are welcome to try them, contribute, and collaborate with us.

Job Opportunity


People

Jilong Xue

Principal Researcher/ Research Manager

Lingxiao Ma

Senior Researcher

Youshan Miao

Senior Researcher

Wenxiang Hu

Senior Research SDE

Wei Cui

Senior Research SDE

Fan Yang

Sr. Principal Research Manager

Lidong Zhou

Corporate Vice President, Chief Scientist of Microsoft Asia Pacific R&D Group, Managing Director of Microsoft Research Asia