AI Discussion with Amy Hood and Kevin Scott
Monday, June 12, 2023
Amy Hood, EVP & CFO and Kevin Scott, EVP of AI & CTO
Who: Amy Hood, EVP & CFO and Kevin Scott, EVP of AI & CTO
Event: AI Discussion with Amy Hood and Kevin Scott
Date: June 12, 2023
Amy Hood: Welcome everyone. I'm Amy Hood, the Chief Financial Officer here at Microsoft, and I'm here with Kevin Scott, our Executive Vice President of AI, and our Chief Technology Officer. And I just wanted to welcome Kevin to talk directly with investors because, given the significance of the next generation AI, and our leadership position in this AI era, we wanted to spend some time talking more about this after the myriad of questions that we've gotten both in meetings and from investors on conference calls.
And I think this can help frame the growth opportunity ahead, really speaking with what I consider to be the person in charge of running the show here.
So, with that, let's jump right in. So, the first question, and it gets asked this way to me, Kevin, so I'll ask it the same way to you, which is, listen, we've seen multiple hype cycles in tech: crypto, metaverse. So how transformative will next-generation AI really be?
Kevin Scott: There have been multiple hype cycles in AI. When I was in grad school, we were in what is called an AI winter, because there've been so many boom and bust cycles, like there's a Wikipedia page describing what the busts look like.
I will say, I've been doing machine learning and AI work since I stopped being a computer science professor in 2003. And we've been on this really interesting exponential trajectory over the 20 years. And I think we've kind of been in the flat part of the curve for most of the 20 years. And we really started to see an inflection starting first in 2012 with this realization that we finally had all of the compute and all of the data that we needed to really extract performance out of deep neural networks.
And what we've seen, just in a remarkable way, and it's one of the reasons why we've done our investment with OpenAI, and why we've built all of this supercomputing infrastructure, and why we have been able to build this new category of applications that we're calling Copilots, is the models over the past handful of years have become to behave really like a platform.
So, they are things where you're not training them and conditioning them for a single use, but they're big pieces of infrastructure that you build, and you can use across many, many, many, many things. And if I look at how technology cycles have gone just in my lifetime, so there's been personal computing, there's been the internet, and there's been the mobile revolution.
And I think this is sort of the next thing on that progression of the evolution of technology where it really changes the way that we build software. The way that I've been talking about it to folks is just to try to describe how profound it actually is, is we've been writing computer programs and using computing technology the same way for 180 years. Ada Lovelace wrote the first program 180 years ago, and we've been doing things more or less the same way since then with more powerful computing devices and more powerful software abstractions.
But basically, if a human being wants to get a computing device to do something for them, you either have to be a skilled programmer, or you have to get a skilled programmer to anticipate your needs so that you can run the program that they've written.
And now we have AI tools that anyone can use to express a thing that you want to accomplish. And in a very flexible way, you can get the computer to do the thing for you. And we're not done on this progression yet just in terms of how much more capable the technology is going to get.
Amy Hood: This is an interesting one, Kevin, too, because I would say from my perspective in talking to customers, it's showing up in every conversation, and they aren't asking about the hype. It's very different.
They're asking, "Hey, listen, how quickly," based on really what you just said, is that it makes it so much more approachable for problem-solving, real problems that people in every seat in an organization feel and see opportunity in. And so, I think they're more asking when they can deploy it far more than, "Gosh, is this a real thing or not?"
So yeah, it certainly resonates, and given this broad sort of excitement, what should investors know about our AI strategy, and what differentiates that really? I think the amount of news, sheer news, that's come out in the past, it feels like a very long time, but it's really only been three or four months. Maybe you can help distill that difference of what our plan is and what makes that different.
Kevin Scott: Yeah, I mean, I think we've had a strong point of view about where we thought this AI platform was trending for the past five solid years now. We started a transformation inside of Microsoft about a year before we even did the partnership with OpenAI.
And part of that is we really believe that these models are becoming platforms. We believe that we're going to have a small handful of them and that we're going to use them as the foundation for all of the software that we're creating. We have rethought how we build all of our low level infrastructure from the way that we design and deploy data centers to what compute and networking looks like, all the way up to what the software development environment looks like. And we talked about a little bit of that this year at Build, like this notion of a Copilot stack, which is just as distinctive, I think, as the LAMP stack was at the beginning of the internet revolution.
The thing that I will say about us is we just have this one through line that we've been investing on the past five years. And the reason that you see all of this news is because we've been focused in the way that we built the platforms on top of which the applications are getting built and delivered. There's just no way we would've been able to produce as many of these Copilot applications. So, Microsoft 365 Copilot, Bing Chat, ChatGPT. I'm probably going to say a whole bunch of Copilots even that haven't launched if I let myself get carried away, but there's just no way that we would've been able to deliver all of that if there wasn't a real platform underlying what it was we were doing. Because it would've been too expensive, too painful, too time-consuming to do each one of those as an independent thing.
Amy Hood: I think what's interesting about how you talked about it, Kevin, is that because this platform is really, it's who we are, it's how we approach every problem as a company, people should think about our approach to AI being incredibly similar to that.
And that, I thought, and I would encourage investors to go back and look at the Build keynotes, because this key aspect is that there's every layer of the platform will be built with this in mind. And I think in particular, certain things really resonated. Microsoft Fabric is an example, right? That really resonated and broke through about what's really different.
I think for me it's also that it's hard sometimes to capture what these large technological transformations happen. I think you and I would say this is sort of a generational moment in terms of watching the innovation that's showing up across the world, across companies of all sizes. I think that's probably what makes it most exciting and in particular for, I think, developers last week. I mean, that's certainly a changing perspective.
And I always tell people you think, oh, about a platform. Well, then this audience always asks me, "Well, where does revenue show up?" It really comes from two places, right? There's the tools and services that customers will use, right, to build AI apps and services. They'll run the Azure OpenAI APIs, they'll run on our Azure infrastructure. And then to your point, it's going to be built into every Microsoft Cloud solution. Everyone. So, whether it's announced or not announced, whether it's named Copilot or not, it'll be that same aspect. There'll be something that shows up that's AI enabled. And I think that's the reason this'll be our sort of the next generation AI business will be the fastest growing $10 billion business in our history. I think I have that confidence because of the energy we're seeing.
Kevin Scott: Yeah, a hundred percent. And the other thing too, I will say, is that because it really is a very general platform, we have lots of different ways that that $10 billion of ARR is going to first show up. So, there is all of the people who want to come use our infrastructure, whether they're training their own models, whether they are running an open source model they've got, or whether they are making API calls into one of the big frontier models that we've built with OpenAI.
It's like all of these products, each of which has a slightly different potential for business model. So, like the way that we are going to be able to just sort of monetize all of this I think is really interesting. And I'm excited for the developers even on top of this, because we're probably going to get some business models that haven't existed before in history. We're going to have business model innovation happening as well.
Amy Hood: And that's really been a hallmark of almost any big technology shift is you both have technical transformation, platform transformation, and business model innovation. And I think it is always exciting. Having lived through maybe the first transition to the cloud that you and I were around for, the second one, I think, in some ways, investors ask me, "Gosh, I'm so worried about all the money it takes to invest in these."
And I try to remind people, as a company, we're in a very different spot. We're leading this cycle. We understand how to run at-scale cloud businesses. And you said something really important earlier, which is we've built a single platform from the ground up effectively, that's used consistently under every service. That's not where frankly we were a decade ago in terms of reusability and fungibility. So, I do think, of course, we'll see scaled CapEx, and there'll be COGS growth associated with that, but we'll also be in a far better spot than we started from. And the ability to leverage that and the confidence we're feeling from customers on demand, I feel quite optimistic about that.
Kevin Scott: Yeah, I mean, the other thing too that I will say on CapEx is it's CapEx that we're deploying, not because we feel like we've gotta go spend it to catch up, but it's CapEx we're deploying because we feel like there's such a gigantic opportunity ahead of us, and that there's going to be such a big return on that invested capital.
Amy Hood: Yeah, and I think that for you and I, puts our head in a different spot in terms of talking to customers and what we see from developers too.
Another question I get a lot, from both investors and customers, is our view on, I think really a topic that we read a lot: ethics, regulation, the overall safety when it comes to what AI is really capable of. And what can you share on these topics and the work we've already done with our responsible AI principles. Because I know this is one of the key components that you think about every day in the scene.
Kevin Scott: Yeah, it's one of those things where I think it actually is another strategic advantage in how it is that we are building the platform. So, I think you have to think about safety and responsibility and security of these systems as one of the necessary bits of the platform itself. You just can't have a platform without it for just a whole number of reasons, like maybe starting with customer trust.
You have to have trustworthy platform, trustworthy infrastructure, trustworthy applications that you're building. And we've been looking at this whole discipline of responsible AI for many years now. So, I think before I even joined Microsoft, Satya charged a bunch of people in Microsoft Research and then our legal team with building a real AI ethics and responsible AI practice. And we have been investing over the years on everything. So what do we need to put in the legal team? What do we need to do research on? What infrastructure do we need to build to help our product teams make sure that they can test and assess their systems for safety and responsibility.
One of the anecdotes I will share is we had all of this press around Bing Chat, the infamous Kevin Roose article, and the interesting thing there is, just even by deploying, we are learning so much about how not just to build things with safety and responsibility, because we believe that everything that we've launched has adhered very strictly to our publicly available Responsible AI Standard that's in its second version right now.
But we're even sort of learning how to use those safety systems to tune things to meet the preferences and expectations of users. And the Kevin Roose thing was basically, I won't even call it fixed, we adjusted the product within a couple of hours of that story breaking. And the only reason that we were able to make the changes quickly as we did is we had all of the safety infrastructure that we had built that allowed us to make sure that the change that we were making on that particular product to conform it to the preferences of users and to make sure that people weren't using the product in ways that we hadn't intended. Was because we had all of this stuff that we had built to let us do that assessment to make sure that the fix didn't cause more problems than it was addressing.
Amy Hood: And I think important in what you said is that, listen, we're not waiting for regulation to take this proactive approach to responsibility, and, I think, going through the process of that. But I've heard you also mention that the regulation is needed, and we've said that as a company. And how do you talk about that in your conversations?
Kevin Scott: Well, I think if you look at any sort of technology that has ubiquitous impact on society, there's always regulation. So, my favorite metaphor is electricity. So, you can sort of think of these AI systems, I think, and like I hope in time they will have similar sorts of impact where 10-15 years from now, you will just have a difficult time imagining how the world could function without them. And if we get to that point, you must expect that there's going to be regulations. So you'll have regulation on the generation bits. The same way that we have regulation on the power generators and the turbines and the generation facilities, We will have that for AI. There will probably be regulation on distribution as there is with electricity, and there'll be regulation on the sort of endpoint of the technology, where it actually gets delivered into a product that a user is touching. And so I don't know exactly what that regulation will look like. Brad did a really brilliant job, I think, a couple weeks ago articulating like what our initial point of view is. And we also believe that there's going to be a lot of debate and public conversation about what it is that regulation really needs to be. But, it would be really suspicious to me, honestly, and it would be a sign that the technology maybe isn't as impactful as we think it is if you believe that it doesn't need regulating.
Amy Hood: Yeah, I think that's an interesting way of thinking about it. The other thing I get asked a lot of from customers is the focus on privacy. I think this is not a surprise, right, as people learn about the technology, and obviously, just so people are clear, the customer data belongs to the customer just like it does today, that applies in the world of AI as well.
I can use the Azure OpenAI example, customer data remains within the tenant boundaries. The LLM is fine-tuned on their data, remains with their data, and is not applied to the master LLM. And so, I think that's just important, and we keep reiterating that with customers. Are you hearing the same sort of questions as you talk to people?
Kevin Scott: Yeah, absolutely. And I think what you just said is like the thing that we have to make sure everyone really understands. We're not using people's data to improve the foundation models themselves. All of the covenants that we have with everyone around their data, with respect to their tenant, are absolutely respected with this technology, whether it's an OpenAI model or something that we built, and we are super diligent, and again, it's one of those things where we're lucky to have spent the past 15 plus years building all of that infrastructure, because we've been able to rely on it as we've been deploying this new AI technology.
Amy Hood: Absolutely, and now I want to move on to, I would say, the second most popular question I get in investor calls at least, is regarding the OpenAI partnership. And why did OpenAI choose us? How did that partnership get started? And people are worried that we don't own the IP that's really required in this area to be considered a leader, and is that a valid concern? So those are the two big questions for you, Kevin, in this one.
Kevin Scott: Yeah, so let me start with the first one. I think, first and foremost, we've been able to do a bunch of really amazing work with OpenAI, and the work that we've done is a testament to how well aligned we've been from the start. The initial intention when we did the first deal in 2019 was we wanted a third party, someone that was outside of Microsoft in addition to our own research and engineering teams, to be able to come in with an incredibly high level of ambition for what they were trying to do with AI. That would put pressure on all of our infrastructure and system software so that we could make sure that we were building the best possible platform for training and serving these new large-scale frontier models, as we're calling them now.
And so I think, if anything, that has worked incredibly well. We've been able to design this infrastructure along with OpenAI. We very quickly, after the partnership, built our first AI supercomputer, that was the thing that GPT-3 was trained on. And in addition to them using our infrastructure, we've also been able to partner on using their models in incredibly creative ways to power our products.
The first of which was GitHub Copilot. We had deployed GPT-3 in a whole bunch of different ways relatively silently in a bunch of products. But GitHub Copilot was the first generative AI first product that I think anyone had really built where the generative AI was just front and center and allowed us to do a thing that people thought, five seconds before we launched, it was actually impossible.
Which is a little bit of the flavor of a bunch of these applications. You deliver them, and you go from in people's mind, like, "That was impossible," to, "Wow, I can't believe it." And so that has been the nature of our partnership.
So yeah, the way to understand our IP rights is we have a very broad license to all of the OpenAI IP for the term of the deal. And we work incredibly closely together on all parts of the stack. They have pushed us in incredibly helpful ways to make sure that our infrastructure and system software are world class. I think you see that right now in just the number of people who choose our cloud to do their own high-ambition things. And it has definitely helped us out in incredible ways in just sort of the scale and pace of delivery that we have. And yeah, we obviously benefit tremendously as well from OpenAI, who I think is one of the smartest group of AI researchers on the planet, but we also complement the things that they're doing as well. We have a really, really fluid partnership where we're very aligned on this platform vision, and we're very aligned on safety and responsibility, and this real mission and incredible obligation that we have to doing it right for society. And we just are pretty flexible about how we help one another.
Amy Hood: And I think for a lot of people, this has been a little bit what's hard to capture, is that you can be in this relationship with a company and a partnership, also have your own internal research capabilities, and we can partner with others. It's just a unique partnership, fundamentally, I think built on trust and mission alignment, which is always an important thing.
But just to reiterate, I think that the two key points you made is, when Kevin talked about having a broad perpetual license to all the OpenAI IP developed through the term of this partnership, that means even if the partnership were to end, we would still have those license rights to all the IP, up until AGI.
And the other thing that's critical is that, I think, and Kevin leads a lot of these conversations, as does our legal team and our OpenAI ethics team, I'm sorry, our AI ethics team here, we share the same values for responsibility and the safe deployment of this IP. And I can't tell you how foundational that is to being able to make progress.
And both of us can independently make this technology available to our respective users and developers. You're seeing that happen. I think that's good for everyone and really keeps the innovative spirit. And this AI capability is built into the Azure foundation. And so, I think it's sort of a long way of Kevin and I of saying that we feel very good about the position we're in both with the partnership and with access to the IP over the long-term.
Now, let's go back to I think what a lot of people always asking me is about, it's like, "Hey, listen, you're really hearing from customers that they're going to deploy this?" And, "What are the use cases we're seeing today?" And, "If I was to open my mind up, what are the use cases we'd see tomorrow?" And I would say, maybe I'll say some of the things I'm hearing, Kevin, and I think you probably hear even more, especially longer term what's possible, but we hosted CEOs on campus not so long ago, Kevin, and you hosted a conversation and I met one-on-one, but it was fascinating. I mean, it was every industry, whether it's healthcare, where we talk about Epic with medical records; whether it was manufacturing and talking about how to detect and describe problems between the plants and headquarters; whether it's education and having personalized tutors, or retail, or developers to your point, I think it's coming almost, I would describe it, it's a global conversation, and it's a by-industry conversation. So, what are you thinking in terms of use cases that are possible, and I think you're closer to some of those industry cases than I am for sure.
Kevin Scott: Yeah, I think very broadly speaking, this particular wave of technology transformation is going to be all about really driving productivity in places where people are doing cognitive work. And so this is just sort of a, it is totally wordy way of saying white collar work. So like your job, my job, anyone who is really using their mind to do something. What I think is going to be enormously obvious in the coming months is there's a huge amount of stuff that we all do in our cognitive work that is sort of routine, repetitive, like we're probably not super excited about it ourselves. It's the annoying, stressful parts of our job. I think almost every one of those things gets impacted in the short term by these tools.
And the way that we are thinking about them is this choice of the term Copilot to describe this new type of software is a really deliberate choice, because we believe that there are going to be lots of these Copilots, and that they're going to be, some of them, very general like Bing Chat or ChatGPT, but some of them like GitHub Copilot, relatively focused on a particular type of cognitive work. And so, I think whether it's software development, or it's marketing, or it's customer support, or doing prior auth for insurance. You just sort of pick your thing where I know that I encounter these things every day multiple times, where you're like, "I can't believe we're still doing things this way." Now is the moment where you sort of finally, I think, get to get past the "I can't believe we're still doing things this way," And you get to go do them a different way.
And I think in the longer term, the real prize here are these more generally capable AI systems that can really go solve big problems. The thing that I talk about with folks, and I don't know when this will happen, but it's sort of the aspirational thing. If you had an AI that had scientific expertise and the ability to go think about a problem for a really long time, and you said, "Hey, I want to go spend $100 million worth of compute with this AI expert to go cure pancreatic cancer in all of its forms. Like that would be the easiest $100 million that human beings had ever gone to spend. The amount of pain and suffering and economic losses and just general misery that some of these diseases cause. And that's, so, the aspiration for the future, can you have systems that can help us go solve problems that we can't solve right now, even using the best tools that we have. So, what is that better tool that helps us go solve those really nasty problems?
Amy Hood: And I think that's why, in so many ways, this time period where you can both be and have real concerns about the power of the technology, but far more than that, so much optimism about the possibility. And I think that those two things can coexist, and I think frankly always have in large transitions in history. And so just a few things. First, Kevin, thank you so much for joining us today. In some ways, I think there's no better person to have tell investors what our vision is, why it's different than the person responsible for making sure our vision and strategy gets executed to the very best of our capability.
And so, thank you both for taking on that large mantle for us and for joining us today. And thanks to everybody for watching and getting a bit more educated in terms of what's different about our approach. I will say, we've waited a long time for me to get more productive here at the company, Kevin. So maybe this is the final secret that gets me there and increases this CFO's productivity at least. And so, thanks everyone, and I look forward to delivering, really, Kevin, both for our customers, for developers, frankly, and for a society that I think is ripe for the adoption of this technology and our leadership position in it. So, thank you, and we certainly remain committed to leading this wave.
Kevin Scott: All right. Thank you, Amy.
Microsoft Corp (MSFT)
Follow us