Microsoft Research presents its latest advances in computer systems at OSDI 2018

By , Principal Researcher

Researchers from Microsoft Research will present their latest advances in computer systems at the USENIX Symposium on Operating Systems Design and Implementation (OSDI) 2018 — the biennial flagship conference for systems research — October 8–10 in Carlsbad, California.

These advances cover a broad spectrum of topics, as evidenced by the number of papers coauthored by researchers from Microsoft Research, which constitute about a quarter of the conference’s technical program. These papers will be presented in nine of the twelve sessions. Many of these advances are the result of collaboration with our academic partners, including research interns visiting Microsoft Research for their summer internships.

The papers describe the latest breakthroughs and progress on long-standing problems in cloud computing, artificial intelligence, distributed systems, blockchains, and operating systems, among other areas. In particular, they make substantial progress in improving the reliability, efficiency, and security of large-scale systems. Not only do these developments advance the state of the art; several of them have already been deployed in Microsoft products and services. Here, we preview some of these papers.

Orca: Differential Bug Localization in Large-Scale Services

Today, we depend on numerous large-scale services for basic operations such as email. These services are elaborate and extremely dynamic, as developers continuously commit code and introduce new features, new fixes, and — consequently — new bugs. Hundreds of commits may enter a deployment simultaneously. Therefore, one of the most time-critical, yet complex tasks toward mitigating service disruption is to localize bugs to the right commit.

In this work, researchers present the concept of differential bug localization, which combines differential code analysis and software provenance tracking to effectively pinpoint buggy commits, and introduce Orca, a customized code search engine that implements differential bug localization. On-call engineers (OCEs) of O365 Core, a large enterprise email and collaboration service, use Orca to localize bugs to the appropriate buggy commits. The authors’ evaluation shows that Orca correctly localizes 77 percent of bugs caused by code regressions and leads to a fourfold reduction in the work done by OCEs.
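To give a flavor of the problem, here is a deliberately simplified sketch of localizing a bug to a commit: given a failure symptom and a set of recent commit diffs, rank commits by weighted token overlap between the symptom and each commit's changed lines. This is a hypothetical stand-in for illustration only — Orca's actual differential code analysis and provenance tracking are far more sophisticated, and all names and data below are invented.

```python
from collections import Counter
import math

def tokens(text):
    # Crude tokenizer: lowercase alphanumeric runs.
    out, cur = [], []
    for ch in text.lower():
        if ch.isalnum():
            cur.append(ch)
        elif cur:
            out.append("".join(cur))
            cur = []
    if cur:
        out.append("".join(cur))
    return out

def rank_commits(symptom, commits):
    """Rank commits by lexical overlap between the failure symptom and
    each commit's changed lines, down-weighting tokens that appear in
    many commits (an IDF-style weight)."""
    sym = set(tokens(symptom))
    df = Counter()  # document frequency of tokens across commits
    for diff in commits.values():
        df.update(set(tokens(diff)))
    n = len(commits)
    scores = {}
    for cid, diff in commits.items():
        score = 0.0
        for t in set(tokens(diff)) & sym:
            score += math.log((n + 1) / (df[t] + 1)) + 1
        scores[cid] = score
    return sorted(scores, key=scores.get, reverse=True)

# Invented example: three commits, one symptom report.
commits = {
    "c1": "+ retry_count = 0  # reset retry logic in mailbox sync",
    "c2": "+ raise QuotaExceededError when mailbox quota is full",
    "c3": "+ update copyright year in header",
}
print(rank_commits("users see QuotaExceededError opening mailbox", commits))
```

The distinctive token `QuotaExceededError` carries most of the weight, so the commit that introduced it ranks first — mirroring the intuition that rare, symptom-specific terms are the strongest localization signal.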

REPT: Reverse Debugging of Failures in Deployed Software

Microsoft is committed to providing high-quality services and products. Unfortunately, not all bugs can be found prior to release, so Microsoft constantly improves its services and products by collecting crash reports from customers for postmortem failure diagnosis. However, debugging such failures is hard because developers must speculate about the conditions leading up to the failure based on limited information, such as memory dumps capturing the program’s end state when the failure was detected. The execution history is usually unavailable because high-fidelity tracing is too expensive for production environments, especially when software failures are rare and most traces would therefore be discarded as belonging to successful runs.

In response, researchers created REPT, a practical reverse-debugging solution for production failures. REPT acts as a time machine, allowing developers to go back and replay the failure multiple times to better understand its root cause and devise an effective fix. Researchers realize this in two steps. First, they leverage highly efficient hardware tracing to log the control flow and timing information of unmodified programs at runtime. Second, they use a sophisticated binary analysis algorithm to recover the data flow offline, based on the memory dump and the control flow logged by the hardware tracing.
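The second step can be illustrated with a toy backward analysis. Assuming a recorded instruction sequence and the final register values from a crash dump, we can walk the trace backwards, inverting invertible instructions and marking destroyed values as unknown. This is a hypothetical miniature with a two-instruction toy ISA, not REPT's actual algorithm, which handles real x86 semantics, memory, and error correction:

```python
def recover_backward(trace, final_regs):
    """Recover earlier register states from the final (crash-dump) state
    plus the recorded control flow. Instructions are inverted where
    possible; values destroyed by an instruction become None (unknown)."""
    regs = dict(final_regs)
    states = [dict(regs)]                 # last entry = state at the crash
    for op, dst, src in reversed(trace):
        regs = dict(regs)
        if op == "add":                   # forward: dst += src
            if regs.get(dst) is not None and regs.get(src) is not None:
                regs[dst] -= regs[src]    # invertible: subtract back
            else:
                regs[dst] = None          # can't invert with unknowns
        elif op == "mov_imm":             # forward: dst = constant
            regs[dst] = None              # old dst value was destroyed
        states.append(regs)
    states.reverse()
    return states                         # states[i] = regs before trace[i]

# Invented trace leading up to a "crash", plus the dumped registers.
trace = [("mov_imm", "a", 5), ("add", "b", "a"), ("add", "a", "b")]
hist = recover_backward(trace, {"a": 12, "b": 7})
```

Walking backwards recovers that `a` was 5 and `b` was 2 before the additions, while the value of `a` prior to the constant move is correctly reported as unknown — the same "recover what you can, flag what you can't" discipline REPT applies at scale.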

REPT has been deployed into the ecosystem of Windows. Its hardware tracing component runs on hundreds of millions of Windows 10 devices, its binary analysis component is integrated into Windows Debugger, and Windows Error Reporting is enhanced to support REPT.

Proving the Correct Execution of Concurrent Services in Zero-Knowledge

In this paper, my coauthors and I introduce a system called Spice. Spice makes a significant advance toward realizing a foundational primitive called verifiable state machines (VSMs), which has applications in third-party computing models such as cloud computing and blockchains.

A VSM is a request-processing service that produces cryptographic proofs establishing that requests were executed correctly according to a specification. These proofs satisfy two important properties: they are very short and a verifier can check them efficiently without re-execution, and they are zero-knowledge, meaning that a verifier does not learn anything about the content of requests, responses, or the internal state of the service. Because of these properties, VSMs can be used to implement publicly verifiable versions of security-critical services such as payment networks, private stock exchanges, blockchains, and smart contracts — without exposing private, sensitive details of the service to a verifier or an auditor.
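To make the state-machine-with-proofs shape concrete, here is a toy sketch in which a service commits to each (request, response) pair with salted hashes and publishes a digest chain that an auditor can check without seeing any contents. Note the big caveat: this only demonstrates ordering and integrity over hidden data; proving *correct execution* in zero-knowledge is the hard part that Spice actually solves, and everything below (the `ToyVSM` class, the running-balance service) is invented for illustration.

```python
import hashlib
import os

def commit(data: bytes):
    """Salted hash commitment: hiding (while the salt stays secret)
    and binding (SHA-256 collision resistance)."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + data).digest()

class ToyVSM:
    """Toy request-processing service publishing a hash chain over
    committed (request, response) pairs."""
    def __init__(self):
        self.balance = 0            # private service state
        self.digest = b"\x00" * 32  # public chain head
        self.openings = []          # salts + commitments, kept by the service

    def process(self, request: int):
        self.balance += request     # the (private) state transition
        response = self.balance
        s1, c1 = commit(str(request).encode())
        s2, c2 = commit(str(response).encode())
        self.digest = hashlib.sha256(self.digest + c1 + c2).digest()
        self.openings.append((s1, c1, s2, c2))
        return response, self.digest

def verify_chain(commitment_pairs, head):
    """Auditor recomputes the chain from commitments alone, learning
    nothing about request or response contents."""
    d = b"\x00" * 32
    for c1, c2 in commitment_pairs:
        d = hashlib.sha256(d + c1 + c2).digest()
    return d == head

svc = ToyVSM()
for r in (10, -3, 5):
    _, head = svc.process(r)
pairs = [(c1, c2) for (_, c1, _, c2) in svc.openings]
print(verify_chain(pairs, svc.digest))  # True
```

Reordering or tampering with any committed pair changes the recomputed head, so the check fails — but unlike a real VSM, nothing here proves the responses were computed correctly from the requests.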

Prior systems implement this primitive, but they incur prohibitive resource costs. Spice addresses these costs with new techniques spanning multiple areas, including systems, cryptography, and theory. These advances enable Spice, running on a cluster of 16 Microsoft Azure servers, to achieve 488–1,167 transactions per second for a variety of services, including distributed payment networks, cloud-hosted ledgers, and dark pools. This represents 18,000–685,000 times higher throughput than the prior state of the art.

Focus: Querying Large Video Datasets with Low Latency and Low Cost

Large volumes of video are continuously recorded from cameras deployed for traffic control and surveillance. A key goal of these recordings is to answer “after the fact” queries, such as identifying video frames containing objects of certain classes (for example, cars or bags) across many days of recorded video. Such queries are used by analysts for planning, investigations, and many other activities. Current systems for processing such queries incur either high cost at video ingest time or high latency at query time. Researchers present Focus, a system providing both low-cost and low-latency querying on large video datasets.

Focus’s architecture flexibly and effectively divides the query-processing work between ingest time and query time. At ingest time of live videos, Focus uses cheap convolutional neural network (CNN) classifiers to construct an “approximate index” of all possible object classes in each frame. At query time, Focus leverages this approximate index to provide low latency, compensating for the lower accuracy of the cheap CNNs through the judicious use of expensive and accurate CNNs. Experiments on commercial video streams show that Focus is 48 times (and up to 92 times) cheaper than using expensive CNNs for ingestion and provides 125 times (and up to 607 times) lower query latency than state-of-the-art video querying systems.
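The ingest/query split can be sketched in a few lines. In this toy version, a cheap classifier indexes each frame under its top-k candidate classes (high recall, low precision), and the query path re-checks only those candidates with an accurate classifier. The two lambdas standing in for the cheap and expensive CNNs, and the frame data, are invented for illustration; the real system's index construction and CNN specialization are considerably more involved.

```python
def build_index(frames, cheap_classify, k=4):
    """Ingest time: run a cheap classifier on every frame and index the
    frame under each of its top-k candidate classes."""
    index = {}
    for frame_id, frame in frames.items():
        for cls in cheap_classify(frame)[:k]:
            index.setdefault(cls, set()).add(frame_id)
    return index

def query(index, frames, target, expensive_classify):
    """Query time: re-check only the indexed candidate frames with the
    expensive, accurate classifier."""
    hits = []
    for frame_id in index.get(target, ()):
        if expensive_classify(frames[frame_id]) == target:
            hits.append(frame_id)
    return sorted(hits)

# Hypothetical stand-ins: each "frame" is its ground-truth label plus
# the lookalike classes the cheap model might confuse it with.
frames = {
    1: ("car", ["car", "truck"]),
    2: ("bag", ["bag", "box"]),
    3: ("truck", ["truck", "car"]),
    4: ("person", ["person"]),
}
cheap = lambda f: f[1]      # ranked candidate classes (fast, imprecise)
expensive = lambda f: f[0]  # the true class (slow, accurate)

idx = build_index(frames, cheap)
print(query(idx, frames, "car", expensive))  # [1]
```

A query for "car" touches only the two indexed candidates rather than all four frames, and the expensive classifier filters out the false positive — the same cost/latency trade Focus makes at scale.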

Graviton: Trusted Execution Environments on GPUs

Trusted execution environments (TEEs) can serve as a building block of a secure and trustworthy cloud. While cloud operators increasingly rely on accelerators such as GPUs and specialized AI processors, existing TEEs are restricted to CPUs and cannot be used in applications that offload computation to accelerators, creating an undesirable tradeoff between security and performance. The Graviton project investigates the design of TEEs on GPUs, enabling applications to securely offload security- and performance-sensitive computations to GPUs with strong isolation from privileged attackers. This work demonstrates three points: hardware complexity is low, as the proposed hardware extensions are limited to peripheral components of the GPU; the complexity of the security protocols can be hidden behind the GPU programming model; and the performance overhead is low.

Beyond the papers previewed here, many other exciting papers from Microsoft Research will be presented at OSDI. We look forward to sharing more details on these works at the conference!

Microsoft Research Contributions to OSDI 2018

Capturing and Enhancing In Situ System Observability for Failure Detection
Peng Huang, Johns Hopkins University; Chuanxiong Guo, ByteDance Inc.; Jacob R. Lorch and Lidong Zhou, Microsoft Research; Yingnong Dang, Microsoft

REPT: Reverse Debugging of Failures in Deployed Software
Weidong Cui and Xinyang Ge, Microsoft Research Redmond; Baris Kasikci, University of Michigan; Ben Niu, Microsoft Research Redmond; Upamanyu Sharma, University of Michigan; Ruoyu Wang, Arizona State University; Insu Yun, Georgia Institute of Technology

RobinHood: Tail Latency Aware Caching — Dynamic Reallocation from Cache-Rich to Cache-Poor
Daniel S. Berger and Benjamin Berg, Carnegie Mellon University; Timothy Zhu, Pennsylvania State University; Siddhartha Sen, Microsoft Research; Mor Harchol-Balter, Carnegie Mellon University

Focus: Querying Large Video Datasets with Low Latency and Low Cost
Kevin Hsieh, Carnegie Mellon University; Ganesh Ananthanarayanan and Peter Bodik, Microsoft; Shivaram Venkataraman, Microsoft / UW-Madison; Paramvir Bahl and Matthai Philipose, Microsoft; Phillip B. Gibbons, Carnegie Mellon University; Onur Mutlu, ETH Zurich

Verifying Concurrent Software Using Movers in CSPEC
Tej Chajed and Frans Kaashoek, MIT CSAIL; Butler Lampson, Microsoft; Nickolai Zeldovich, MIT CSAIL

Proving the Correct Execution of Concurrent Services in Zero-Knowledge
Srinath Setty, Microsoft Research; Sebastian Angel, University of Pennsylvania; Trinabh Gupta, Microsoft Research and UCSB; Jonathan Lee, Microsoft Research

The FuzzyLog: A Partially Ordered Shared Log
Joshua Lockerman, Yale University; Jose M. Faleiro, UC Berkeley; Juno Kim, UC San Diego; Soham Sankaran, Cornell University; Daniel J. Abadi, University of Maryland, College Park; James Aspnes, Yale University; Siddhartha Sen, Microsoft Research; Mahesh Balakrishnan, Yale University / Facebook

Orca: Differential Bug Localization in Large-Scale Services
Ranjita Bhagwan, Rahul Kumar, Chandra Sekhar Maddila, and Adithya Abraham Philip, Microsoft Research India

Gandiva: Introspective Cluster Scheduling for Deep Learning
Wencong Xiao, Beihang University & Microsoft Research; Romil Bhardwaj, Ramachandran Ramjee, Muthian Sivathanu, and Nipun Kwatra, Microsoft Research; Zhenhua Han, The University of Hong Kong and Microsoft Research; Pratyush Patel, Microsoft Research; Xuan Peng, Huazhong University of Science and Technology and Microsoft Research; Hanyu Zhao, Peking University and Microsoft Research; Quanlu Zhang, Fan Yang, and Lidong Zhou, Microsoft Research

PRETZEL: Opening the Black Box of Machine Learning Prediction Serving Systems
Yunseong Lee, Seoul National University; Alberto Scolari, Politecnico di Milano; Byung-Gon Chun, Seoul National University; Marco Domenico Santambrogio, Politecnico di Milano; Markus Weimer and Matteo Interlandi, Microsoft

Graviton: Trusted Execution Environments on GPUs
Stavros Volos and Kapil Vaswani, Microsoft Research; Rodrigo Bruno, INESC-ID / IST, University of Lisbon

ASAP: Fast, Approximate Graph Pattern Mining at Scale
Anand Padmanabha Iyer, UC Berkeley; Zaoxing Liu and Xin Jin, Johns Hopkins University; Shivaram Venkataraman, Microsoft Research / University of Wisconsin; Vladimir Braverman, Johns Hopkins University; Ion Stoica, UC Berkeley
