Project Adam and the Future of Programming

Posted by Rob Knies

Perhaps you’ve heard about Project Adam over the last few days. That work, which shows that large-scale, commodity distributed systems can train extra-large deep neural networks efficiently, has received its share of attention in the tech media this week after being featured during the 2014 Microsoft Research Faculty Summit in the event-opening keynote address by Microsoft executive Harry Shum.

Or maybe you saw the story On Welsh Corgis, Computer Vision, and the Power of Deep Learning, which appeared on the Microsoft Research website. That one was based on a fascinating interview with project colleagues Trishul Chilimbi and Johnson Apacible, one not dissimilar to Channel 9’s engaging video discussion with Chilimbi as part of the Microsoft Research Luminaries series.

It’s always instructive to conduct one of these interviews. One minor drawback, though: Good stuff inevitably ends up on the cutting-room floor. Sometimes, the interview just runs too long to include everything—or maybe a passage of it veers from the story arc developed before the writing begins.

As an example of the latter: when I talked with Chilimbi and Apacible and had exhausted my list of prepared questions, I ended, as has become my habit, by asking whether they had anything they wanted to add before we were through. Apacible had a great response, commending Microsoft Research management for its trust and support in backing such a risky project. That quote made it into the published story.

Chilimbi also had a great response, but when analyzing which quotes I wanted to use, his seemed to veer from the direction I saw the story taking. He wanted to talk about how Project Adam’s big-data, deep-learning approach, used to classify an ImageNet collection of 14 million images, could alter the future of programming. That wasn’t where I wanted the article to go, but it was captivating and thought-provoking, and in the interest of encouraging you to go watch the video, here’s what you didn’t read on Monday.

“The one thing that’s interesting and fundamental to me is how [deep learning] changes how we think about computers and programming,” Chilimbi said. “Say I would program a system to do the ImageNet classification task. As a programmer, the way I might go about it would be, ‘OK, I’ll program something to recognize faces or eyes.’

“That’s traditionally how we write programs. People have written programs that sought to do image-classification tasks, and the accuracy of those programs is way below that of the automatically learned system that operates on this task.”
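To make that contrast concrete, here is a minimal sketch of the two approaches Chilimbi describes. It is not Project Adam or anything close to an ImageNet-scale system; it uses synthetic data and a tiny logistic-regression learner purely to illustrate the difference between a rule a programmer guesses and a rule learned from data.

```python
import numpy as np

# Toy stand-in for an image-classification task: each row is a flattened
# 8x8 "image"; the label says whether it contains the target object.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))
y = (X[:, :8].mean(axis=1) > 0).astype(int)  # hidden rule the learner must discover

# Hand-coded approach: the programmer guesses a feature and a threshold.
def hand_coded_classifier(x):
    return int(x[:8].mean() > 0.5)  # the guessed threshold is off, so accuracy suffers

# Learned approach: logistic regression fit by gradient descent on the data.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on the log loss
    b -= 0.5 * (p - y).mean()

hand_acc = np.mean(np.array([hand_coded_classifier(x) for x in X]) == y)
learned_acc = np.mean(((X @ w + b) > 0).astype(int) == y)
print(f"hand-coded rule: {hand_acc:.2f}, learned model: {learned_acc:.2f}")
```

In this toy example the learned model recovers the underlying rule from the data, while the hand-written guess does not; Chilimbi’s point is that the same dynamic plays out, at vastly larger scale, on tasks like ImageNet classification.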

Then, as researchers are wont to do, Chilimbi pivoted from the specific to the general.

“What it’s saying,” Chilimbi said, “is this methodology of learning and providing vast amounts of data and computing to train a model is a way of synthesizing a system that’s more complicated than anything we can program today. That’s pretty interesting. It makes you think of the possibilities.”

Indeed.