Wells Fargo TMT Summit

Tuesday, December 3, 2024
Mat McBride, CVP & CFO, Microsoft Margin Strategy and Investment Optimization, Microsoft

Transcript

Who: Mat McBride, CVP & CFO, Microsoft Margin Strategy and Investment Optimization, Microsoft
Event: Wells Fargo TMT Summit
Date: December 3, 2024

Michael Turrin: Okay. Looks like it's 12:00, let's kick off.

Mat McBride: All right. Let's get going.

Michael Turrin: So very much looking forward to this session. This is our lunch keynote session, Day 1 of the Wells Fargo TMT Summit. Very pleased to have Mat McBride with us from Microsoft.

Mat, your title is -- if I have your LinkedIn profile correct -- CFO of Margin Strategy and Investment Optimization.

Mat McBride: Yes.

Michael Turrin: Which is very top of mind and appropriately...

Mat McBride: It's all about capital allocation in my world.

Michael Turrin: Nebulous title.

Mat McBride: Yes.

Michael Turrin: You've been in the Microsoft Cloud Infrastructure Group for some time as well. So I think there's probably more than we could cover in the 35 minutes that we have. But I guess, just in framing your background, maybe walk us through your priorities and what falls under your purview in terms of roles and responsibilities. And then I think I have a good idea of where the conversation will go from here.

Mat McBride: Yes, sounds good. So I think the title outside said “Head of CapEx” or something like that. I think that's probably a good layman's definition of what I do. I mean I'm all about -- if you think about Satya and Amy's job as capital allocators, whether that's capital or resources in the form of headcount or any other form of spend, I cover the majority of that across the company. So that's kind of my job.

Michael Turrin: Okay. I guess I want to try to frame this in a few different ways. But just correct me if I don't have this right. Were you involved in the same role, the same type of group during the Azure build-out cycle? What's your -- sort of what's your level of experience or time line with respect to Azure?

Mat McBride: Yes. Sure. So, I think from my time in commercial, I actually started at the company in the Bing business and was the Bing CFO for a little while.

Michael Turrin: Satya was over there, too.

Mat McBride: Yes, totally. I was actually a finance partner for a long time. So that ended up being fortuitous.

Michael Turrin: That makes sense.

Mat McBride: Yes, so that was super helpful. And so after I left that, I did the Office 365 job, right, when we were kind of turning it into a service, and it was -- back then it was called BPOS, the Business Productivity Online Suite. And then I did that for a while and then moved over to be the CFO of the Intelligent Cloud division for a while. And now, for the last almost 8 years, I've had this horizontal cost management role for the company.

Michael Turrin: Okay. So what I'm curious about -- and there are probably several different layers we can get into with this -- is just comparing and contrasting the build cycle that Microsoft is currently undertaking toward AI, and we've seen the numbers, versus what the Azure build-out cycle was like, and whether there are similar muscles that you're able to exercise that give you an advantage. One thing that seems very different, or somewhat different, is that Azure was following AWS to some degree, whereas in AI you have a more pronounced lead in the market. But I'd love to just start with that train of thought, get your perspective on how you compare and contrast the two, and then get into some more specific questions.

Mat McBride: Yes, sure. I think it's definitely different when you're playing catch-up -- when you're kind of saying, look, the market has already been made, and you're going out and developing infrastructure and capability that lets you fast follow and become successful in a new endeavor. We certainly were able to leverage our installed base and our relationships with customers in the on-premises world and transition that to the cloud. And we did that. It started out very much unprofitable.

We worked very, very hard to come up with what we think is -- are the offerings that customers wanted because of our deep relationships with them. And we worked over many years to develop what now is a great story around profitability, around gross margin for the company. And it was just really about chipping away at it day after day after day. And this is different in the sense that we're leading from the front, nobody has done it before. But we're certainly leveraging all that DNA that we built in terms of how to think about driving efficiencies, developing technology, working with customers to be centric about what their needs are and developing solutions that will help them in whatever their endeavor is. And so it's -- we leverage a lot of the DNA, the processes, the capabilities that we built, but it definitely feels different. I would say we are working very, very, very rapidly right now. It's a breakneck pace inside Microsoft.

Michael Turrin: What -- when the Azure build cycle started, there was this legend that there were actually memos being passed around about the move to Internet services, and I think there was a lot that went into that. How pronounced is the focus on Generative AI, or AI in general, within the company? Is it similar in terms of the underlying focus across the organization at this point?

Mat McBride: Yes. I mean, it definitely is. When we first started, it was like there were specific people with AI roles. And now basically, most people's roles touch it in one form or another, because we're trying to take that AI innovation and put it into all of our products and services and ways to deliver new value to customers. And so there really aren't -- I mean, there are some, but it's not like there's an AI team and a legacy CPU-and-storage team or Office team that's separate from the AI capability we're trying to infuse in all the products. And you've definitely seen that with just the breakneck pace of innovation since ChatGPT was launched; what Microsoft has been able to deliver to the market -- faster than we've ever done -- has been really impressive. And that only happens when you have someone like Satya, who I think has done an amazing job mobilizing people around a set of priorities and focusing everybody on how to deliver and go to market very quickly.

Michael Turrin: I think one thing that investors are continually trying to figure out is how to gauge the CapEx spend increase within Microsoft. And I think management has been clear in giving us an AI contribution metric on Azure and communicating that this is demand-driven; this isn't just building for the sake of building. What's your perspective on us thinking about CapEx as sort of a leading indicator, in terms of how we're thinking about the demand signals and your focus on building out in front of those?

Mat McBride: Yes. I mean, to some extent, yes, it's definitely a signal for sure. It represents customer and partner signal. The good thing about where we're sitting in our position in the market is we are leading. And so we get a lot of really great signal about what types of solutions customers are after and what the demand looks like. With our partnerships, we're able to understand what the future of the research side of the equation is looking like and evolving to, and it changes every day. And I think it gives us the ability to take that signal and aggregate it into how we think about building something for the long run. And we do think about it that way. There's short-term CapEx and long-term CapEx, and we're trying to give more and more insight into that differentiation, as you've seen on our earnings calls. And we try to take that signal and translate it into something that makes sense, with as much planning for the future as we can at a time like this. But it definitely is, I think, to some extent, representative of the customer signal. And I think, because of our position, we have a good sense of what that looks like.

Michael Turrin: Okay. I mean one of the things you touched on was one of the things I had jotted down, which is Amy's characterization. It's like half of this is long-term -- long lifetime investments and half is more to service current demand. For you in the field implementing, how would you parse that? Like what's the additional context you would add, given you're deeper in the trenches in terms of where these investments are going?

Mat McBride: Yes, sure. I mean when you think about the long term versus short term, what we're trying to talk about is our infrastructure -- things that have a longer lead time to build -- versus something that's shorter term, like a server that we can late bind based upon the signal we see. And so it's really about saying, how do we have the most fungible form of infrastructure for the long run, so that if demand spikes up even more than we expected, we have the data center shell, we've secured power, we have fiber put in the ground or across the ocean that we need to go deliver. And then we late bind the remaining CapEx based upon the shorter cycle of the signal -- we're not talking about multiple years there.
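
As a rough illustration of that late-binding idea -- commit the long-lead items (shell, power, fiber) against the high-demand scenario and defer the short-lead server decision until the signal is firm -- here is a minimal sketch. The lead times, horizon, and megawatt figures are made-up assumptions for illustration, not anything Microsoft has disclosed.

```python
# Hypothetical lead times (months) for each part of the build; illustrative only.
LEAD_TIME_MONTHS = {"datacenter_shell": 24, "power": 36, "subsea_fiber": 18, "servers": 6}

FIRM_SIGNAL_HORIZON = 9  # months out at which the demand signal is assumed reliable (assumption)

def commit_plan(demand_scenarios_mw, firm_demand_mw):
    """Decide what to commit now: long-lead items are sized to the high scenario
    (and kept fungible); short-lead items are late-bound to the firm signal."""
    plan = {}
    for item, lead_time in LEAD_TIME_MONTHS.items():
        if lead_time > FIRM_SIGNAL_HORIZON:
            plan[item] = max(demand_scenarios_mw)  # must order before demand is known
        else:
            plan[item] = firm_demand_mw            # late bind once the signal firms up
    return plan

# Example: three demand scenarios (MW) versus the demand that is firm today.
print(commit_plan(demand_scenarios_mw=[300, 450, 600], firm_demand_mw=350))
```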

Michael Turrin: So the other thing Amy has been telling us about for the past couple of quarters is supply constraints, more in the context of the AI contribution metric, but we've been conditioned to believe that there's a supply constraint that's weighing on growth to some degree and that is expected to ease -- some of the burden to ease -- in the second half of your fiscal year.

Kind of a similar question, you're more in the weeds on where the supply constraints are and what's happening there. So what added perspective can you share on what's driving the kind of the expected change in the shape of AI contribution and Azure growth throughout the fiscal year?

Mat McBride: Yes. I mean the signal that we ingested a couple of years ago, right when ChatGPT launched, was not something that would have been in any of our modeling in terms of an error bar. And so our first cycle, when we saw the demand, was to basically say, great, what megawatts do we have, what data center space do we have, and how do we maximize the leverage we can get from the existing capacity we have? So we went out and created a whole team full of folks to go create efficiency metrics and drive efficiencies through the data center capacity we had, to free up megawatts to go plug in more GPUs as best we could. And then at some point, we said, hey, look, we probably have even more demand than we have supply for. And so we've done everything we can to bring in other capacity that we wouldn't normally have done and to make sure that we can supply the needs from customers as much as we could, while at the same time trying to build the capacity we needed for the long run, so that we weren't caught in the same position with something that was very outside an error band of any P99 or P99.999 or some other number you might pick, in terms of the probability of the spike. And so I think we're in a lot better position now. We've, I think, gone through a maximization phase, and we're bringing in a lot of the long-term CapEx. And that's why you saw such a big number as a percentage of the total CapEx in the last couple of quarters.
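
To make the "error band" framing concrete, here is a minimal sketch of sizing capacity to a high quantile of simulated demand scenarios. The baseline, growth rate, and spread are invented numbers, purely for illustration; the point is that a demand spike like the one after ChatGPT would sit outside any P99-style band a planner had reasonably drawn.

```python
# Illustrative only: synthetic demand draws, not Microsoft data.
import random, statistics

random.seed(0)
baseline_mw, mean_growth, growth_sigma = 400.0, 0.25, 0.15  # hypothetical planning inputs
draws = [baseline_mw * (1 + random.gauss(mean_growth, growth_sigma)) for _ in range(100_000)]

def quantile(values, q):
    """Simple empirical quantile of a list of samples."""
    ordered = sorted(values)
    return ordered[min(int(q * len(ordered)), len(ordered) - 1)]

print("expected demand (MW):", round(statistics.mean(draws), 1))
print("P99 capacity plan (MW):", round(quantile(draws, 0.99), 1))
print("P99.999 capacity plan (MW):", round(quantile(draws, 0.99999), 1))
```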

Michael Turrin: Is there a way to think about how far out in the future you're planning, when we're thinking about the longer-standing investments?

Mat McBride: Yes. Yes. Look, I think, without giving specific numbers, you should assume that we're looking at a very robust opportunity -- I'll let Satya and Kevin Scott and all the other luminaries in the industry talk about this being a big moment in history. When we think about the opportunity that's ahead of us and the ability for us to go out and grow against it, I think we've had to look at constraints in a different way, over a longer horizon. And that's the way we plan. So, if you think about it, if we have a demand signal that has different error bars, different scenarios that we might run, we have to look out in the future and say, hey, do we have enough power generation in a certain location? There's a lead time for power generation that's a lot longer than to build a server, for instance. And so we look out long enough to find constraints based upon the big opportunity that we see and the opportunity to grow. And then we say, hey, what are we going to do about that? And so we're planning out over lots of different horizons, depending on the part of the supply chain that we're talking about.

Michael Turrin: I mean, one question you've probably gotten this morning, and one that we also feel, is just -- it seems like there's variability in how this could play out?

Mat McBride: Yes.

Michael Turrin: Right? And so -- but you're making kind of long-standing investments in some way. So what can you say in terms of what informs the confidence there? And like how many different scenarios you're able to take into account?

Mat McBride: Yes. I mean I think if we talk about the capital we're deploying in the long-term basis, we're trying to make that as fungible as possible, right? So it's a data center shell. It has power. We can put lots of different configurations in that building. And so we're going to keep that as fungible as possible, as long as possible as we look into the future. And we'll try and late bind the last decision of what we -- how we kit out that building as much as we possibly can, to give ourselves the most flexibility for something changing over time. And it does change. I mean we're learning new things all the time. It's real research. And so how the models and how the research happens on the data center, happens on the servers, is evolving every day. And so we have to be agile in that way. And so it's good old-fashioned just-in-time supply chain with a bunch of different scenarios and managing it, yeah.

Michael Turrin: Does having Azure provide an advantage in terms of how you think about potential repurposing, if that were to be-

Mat McBride: Absolutely. Absolutely. I mean I think the example I talked about earlier, when we had the GPU spike and what we're able to do because we're able to take space that wasn't already set up for GPUs and allocate it to GPUs is an example of how we try and be as agile as possible based upon the signal changing, and we'll try and do that into the future, and it's definitely something that we're building into our muscle and our capability right now.

Michael Turrin: How does the company think about and how do you think about just the balance in investing in proprietary technology for these efforts versus leveraging third-party models, third-party tech and capabilities that are out there? There's a lot of reporting on Microsoft doing different things at any point in time, but how do you kind of assess and balance the trade-offs and advantages between the two?

Mat McBride: Yes. I'd say, first and foremost, we're very customer-centric. We've got to be where our customers are. And so as their needs evolve, we want to evolve with them. And there are certain things that customers want in this equation that are hardware-specific, where what they want is a certain type of, for example, silicon, and we want to deliver that to them. And then there are some things where the actual underlying silicon is abstracted -- for all of our API calls, for instance, we can have the API calls run on NVIDIA, AMD, or even our own silicon that we've built. And so diversity gives us the ability, I think, to match where the customers' needs are. And then where we have the ability to optimize for cost and other things in a way that's abstracted from the customer, it gives us the ability to drive more and more efficiency long term in terms of how we deliver the cloud and the unit economics.

Michael Turrin: The -- and what you just described seems like it's been consistent since Satya took over. And it's probably not that discrete, right? Like we have this outside perception that things naturally change when leaders change, and I suspect it's more fluid than that. But he's certainly embraced openness throughout the tenure-

Mat McBride: For sure. We've been that way with Linux-

Michael Turrin: Exactly.

Mat McBride: From the get-go.

Michael Turrin: The iPad.

Mat McBride: Yes. And this is about being -- again, it's about back to the customer centricity. If a customer wants a certain open source model, we want to be able to deliver it to them because we're in the solutions business, we're about finding out what the customer need is, and we're going to go help them deliver it because if we do that, it's going to bring with it a whole other basket of Microsoft goods, not just time on a GPU, which is the -- again, is interesting, but not as interesting as being a solution-oriented company that delivers deep value with customers and something that's durable.

Michael Turrin: Does that map well to just the thoughts on AI strategy and leveraging different models? Because it feels like there's been a lot of focus on the OpenAI and Microsoft relationship. But if I think about Microsoft's customer centricity, there would inherently be a desire to preserve flexibility.

Mat McBride: Yes. I think we've been very like consistent and clear on that point. I mean we're happy with our OpenAI partnership. It's great. It's been fantastic to learn together as we find the market and as we learn how this technology is going to evolve and change all of our lives. And at the same time, we respect all the other options that are out there, and we want to be able to offer those solutions to customers. And so we're very much -- on one hand, we love the partnership and we want to be customer-centric and we want to help them do whatever they need to do. And that's been a very -- as you pointed out, a very consistent position Microsoft has had.

Michael Turrin:  Yes. I'm curious, you've hit on a couple of times, just the rapid pace of what you're doing.

Mat McBride: Yes.

Michael Turrin: I've heard a similar anecdote. This is a breeze for you right now, right, sitting up here, allow your phone-

Mat McBride: Yes. This is the most comfortable I've been in a long time.

Michael Turrin: But I'm curious, given we have this perception to some degree that like just the build cycle is the priority. And there are various signals that suggest that just finding capacity is at a premium. What are the sort of metrics you're tracked on? Or what are the KPIs for your role? Like how do you balance? Obviously, you're not just going and scooping up capacity wherever you can find it, right? There's definitely some nuance. Microsoft always has a P&L-centric approach to how you're measured. So I'm curious, like for you and your role, how do you -- how are you balancing the speed with nuance and finesse and the other things that are likely necessary?

Mat McBride: Yes. I think in this moment, there's a market being made right now, and there's a real need to be able to have capacity as the market is being made. And we're very mindful of that. You can say that that might be being share-focused, or some other term that other people might use. But I think we're definitely saying, we really want to make sure, without being foolish, that we have what customers need capacity-wise to go do the work they need to go do at a time when the market is being made. And I think that's a prudent way to think about it. But that doesn't mean that we have to do that inefficiently at the same time. We can still be efficient. We're not turning a blind eye to unit cost. And then there are certainly, you know, opportunities for us, even after we've brought in the hardware and gotten the data center all set up, to go find new efficiencies over time. And maybe an example I would give you is, as models evolve, the amount of context those models take in at the time a prompt is made is growing. And the ability for a model to take in all the context -- either the prompt itself or something like session data -- is what's called attention. And attention is very challenging when it comes to GPU throughput. When you think about this expensive asset and how you run work through it efficiently, you really want something that is very efficient at doing that. You can talk about it as a “cost per” or a “revenue per” kind of metric. And we're learning all the time: when you expand context, and when you think about the attention that's required for chain-of-thought reasoning -- mechanisms that take all that context into account to give you a better result -- it creates a choke point on the GPU. And so we have amazing teams of researchers looking at ways to manage that, and there are ways for us to be more intelligent about that choke point and make it so that we get more efficient on the GPU itself, even after you've plugged it in, right?

And so we have -- again, we established this mechanism to go do this work back in our original days in the cloud. We have a whole group of researchers. We have teams of engineers that go drive efficiency projects, and I hold councils every month where people come in and say, here's my efficiency plan, and I give them money or I give them headcount to go drive those efficiencies and deliver positive ROI. And we just extended that capability into this new world. And of course, we had to have different talents in some cases. But it's been a very natural transition for us to be able to run fast but also apply these efficiency concepts to an ever-evolving landscape, right? I mean, a new model comes out, the token window goes up; therefore, the size of the prompt goes up, the sessions get bigger, and context and attention go up. And therefore we have a new problem -- great capability, but we need to be able to deliver on a unit cost or revenue-per metric, to get back to your metrics question, that we drive with the team to go say, how do we go do this? And it's really been super fun -- if you're nerdy at all, it's been a very fun ride, I think, for me to be part of this deep technical challenge that results in being able to look outward and say, “look, this is how much we saved, or this is how much more throughput we're getting on this super scarce resource, which means we have new flops to allocate to a new research team to go drive something new, or we can go monetize those flops with the customer.”
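
A back-of-the-envelope sketch of that context/attention choke point, using made-up model and hardware numbers (none of them describe a real Microsoft deployment): as the context window grows, the key/value state that attention has to keep resident grows with it, so fewer sequences fit on a GPU at once and effective throughput per GPU drops.

```python
# Illustrative only: hypothetical model dimensions and memory budgets.

def kv_cache_gb(context_len, n_layers=80, n_kv_heads=8, head_dim=128, bytes_per_val=2):
    """Approximate key/value cache size per sequence, in GB (2 = keys + values)."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_val / 1e9

GPU_HBM_GB = 80      # assumed accelerator memory
WEIGHTS_GB = 40      # assumed share of memory already taken by model weights

for ctx in (4_000, 32_000, 128_000):
    per_seq = kv_cache_gb(ctx)
    concurrent = int((GPU_HBM_GB - WEIGHTS_GB) // per_seq)
    print(f"context {ctx:>7,}: ~{per_seq:4.1f} GB of KV cache per sequence "
          f"-> ~{concurrent} concurrent sequences per GPU")
```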

Michael Turrin: How has the throughput cost consideration evolved relative to what you would have expected thus far? Because it seems like at every point in the evolution, there's this view that the cost curve is coming down and could come down at a more pronounced rate. But I'm curious what you were expecting two years ago, if you had expectations at that point in time, versus what you've seen?

Mat McBride: It's an interesting one, because of the dynamic I just talked about: if the research were semi-static, then you would have a different answer to that question, but the research is still scaling and still growing. And the dynamic, the customer engagement, and what the model can do are evolving. And so I feel like it's a two-sided coin you're constantly flipping. What's the new thing coming in? How do you drive the new capability to offset the throughput tax that comes with it? It just feels very day-in, day-out -- an evolving thing that we're staying very hungry on, pushing super hard to drive the right mindset and the right allocation of resources to adoption and speed on one side and efficiency on the other. But like I say, GPUs come out every 12 to 18 months -- that's tricky. That's the hard thing.

Michael Turrin: Here's something I'd be curious to get your thoughts on. So before this AI wave, there was a common conversation across investors on where we were in the overall cloud migration journey. Azure's growth rate started to slow, and other hyperscalers started to slow. And I started to get these questions, and I suspect they did, too, around why didn't you tell me the public cloud migration was in the seventh or eighth inning. It doesn't seem right, but the numbers were telling a different story at that point in time. If I ask you where you think we are, based on the evaluation that you're doing, just from a cloud perspective -- AI, I'm sure, is much earlier -- what would you say in terms of the inning you think we're in, in the public cloud migration journey?

Mat McBride: Is this a baseball game that's going to end in 9 innings? Or are we going in like...

Michael Turrin: Well, let's keep it 9 for now.

Mat McBride: Okay. All right. Yes. No, I'm just joking -- look, I don't know what inning we're in. What I feel, though, is that what customers need in an AI-driven world will include other services that aren't just the GPU time. When you think about something like fine-tuning or a distillation workload, you have other things that come with it to be able to go do the work to develop the model. And then if you're an ISV and you have an app that actually uses that, you have to host it, you have to deliver it, and it has different needs than it did before you had the AI workload included in it. And so the way I would say it is, the more agentic something becomes, the more it brings along with it other compute and storage services -- either PaaS services or IaaS services -- to go complete tasks and to go do the things that need to be done. And we're learning more about this every day. And so when we think about doing something like capacity planning for the long run -- when we first started this journey, it was like, where can we plug in GPUs? And now it's more like, for a given GPU, how many more CPUs and how much storage do we need to be able to complete various workloads in various scenarios, depending on what customers need?
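
As a small illustration of that shift in the capacity planning question, here is a sketch that attaches CPU and storage to a GPU fleet by workload mix; the workload names and attach ratios are invented for illustration, not Microsoft planning figures.

```python
# Illustrative only: made-up attach ratios per GPU for a few workload types.
ATTACH_RATIOS = {                     # per GPU: (vCPUs, TB of storage)
    "api_inference": (4, 0.5),
    "fine_tuning":   (8, 2.0),
    "agentic_app":   (16, 4.0),       # agentic apps drag along more PaaS/IaaS per GPU
}

def attach_plan(gpu_count, workload_mix):
    """workload_mix: fraction of the GPU fleet devoted to each workload (sums to ~1)."""
    vcpus = storage_tb = 0.0
    for workload, share in workload_mix.items():
        cpus_per_gpu, tb_per_gpu = ATTACH_RATIOS[workload]
        vcpus += gpu_count * share * cpus_per_gpu
        storage_tb += gpu_count * share * tb_per_gpu
    return round(vcpus), round(storage_tb, 1)

print(attach_plan(10_000, {"api_inference": 0.5, "fine_tuning": 0.2, "agentic_app": 0.3}))
```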

Michael Turrin: Another common question that we get, and that I'm sure they get as well, is just around when more generative monetization could show up in the application layer of software. And I'm wondering if there are things that you see from your perspective that are inhibitors to that? Do you think that some of this is still just a function of the need for capacity? Or is it just that we're all too early, because it's only been two years since ChatGPT really had its moment with some-

Mat McBride: It doesn't feel like two years from-

Michael Turrin: Does it feel like five or one?

Mat McBride: It feels like -- yes, more like the five scenarios. I think we're going to have two things happening. You're going to have users that are engaging with these applications -- how agentic they are is to be determined, depending on what the partners and customers are doing with them. But the easier it is for someone to adopt, the faster the adoption goes. And I think that's going to be one of the challenges as we look out into the future and say, how much faster could it grow? Or what do those constraints look like that aren't supply chain versions of constraints? I do think that user adoption, and unlocking those scenarios in ways that fit human nature -- how easy it is to adopt -- will determine the adoption. And I think that's definitely something that is evolving every day and will continue to do so.

Michael Turrin: Are there certain use cases or types of applications that you see are just the most intensive from a resource perspective? Like when you're planning, what are kind of lower intensity uses versus higher intensity uses?

Mat McBride: Yes. I mean I think something like a simple API call is probably kind of on the lower end of something. And then on the other end of the spectrum, you might have something like a distributed training job for a frontier workload, a frontier model, and everything in between.

I think the intensity, again, depending on the workload and whether you're talking about just GPUs or about networking, storage, or the associated CPU compute, is quite different depending on what the customer is trying to do -- which workload. And that's the cool thing: we're not just talking about two workloads anymore.

We're talking about lots of new things and lots of new capabilities. And I think that will continue to evolve over time. And it's all about intensity. We're trying to sweat assets, right? We're trying to keep utilization as high as we possibly can and keep usage up. And so intensity, in my world, from a unit cost perspective, is a good thing. We like that.
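
A simple unit-economics sketch of why "sweating the asset" matters, with invented capex, power, and throughput numbers (purely illustrative): the higher the utilization of an expensive accelerator, the lower the cost per unit of output.

```python
# Illustrative only: hypothetical GPU cost, power, and throughput figures.

def cost_per_million_tokens(capex_per_gpu, lifetime_years, power_cost_per_hour,
                            peak_tokens_per_second, utilization):
    """Amortized hardware cost plus power, divided by realized token throughput."""
    hours = lifetime_years * 365 * 24
    hourly_capex = capex_per_gpu / hours
    tokens_per_hour = peak_tokens_per_second * 3600 * utilization
    return (hourly_capex + power_cost_per_hour) / tokens_per_hour * 1_000_000

for util in (0.3, 0.6, 0.9):
    cost = cost_per_million_tokens(capex_per_gpu=30_000, lifetime_years=5,
                                   power_cost_per_hour=0.50,
                                   peak_tokens_per_second=1_500, utilization=util)
    print(f"utilization {util:.0%}: ~${cost:.2f} per million tokens")
```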

Michael Turrin: Are you -- like, how tied are you to the headlines we see on things like nuclear capabilities potentially being on the road map for enablement? Is that something within your purview that you have some high-level thoughts around?

Mat McBride: Yes. I mean I don't -- I think there's a lot of interest in this one. I think for us, we're looking at how do we go get the power we need, where we need it, in the most sustainable way possible.

It's very simple. It's not -- I'm not trying to be super boring, but we're very boringly focused on the delivery and the customer, and where do we need it, and what do we need, and how do we do that in a responsible way when we consider sustainability. And so nuclear has a great role to play in that world. And in some cases, it makes sense, and we've done nuclear deals -- you've all seen press about that. But it's always going to be “the where” and “the what” and “the who” first, before it's what type of power we're going to get.

Michael Turrin: If I have your profile correct, there's a margin component, like a cloud margin component, to your role as well. Is that right?

Mat McBride: Yes.

Michael Turrin: So can we just spend a little bit of time talking about the balance? I think when generative AI really started to capture attention, there were concerns across software around gross margin degradation. And we haven't seen it to the degree that was expected. I'm curious, from your perspective, why that maybe hasn't been the case, and how Microsoft is able to control margins to the degree that they have, even though we're seeing strong evidence of an investment cycle also taking place.

Mat McBride: Yes. I think I've talked a little bit about that in some of my previous answers. I mean, I think, first and foremost, we're getting as much leverage as we can from what we already had installed, and that creates a tailwind. And then you have to go get those megawatts replaced, because they had a purpose and they had a use. And so there's certainly a short-term thing where we're trying to really drive up utilization.

Some of what we get there will continue for the long run. I think that as we look forward, we're really going to be saying, hey, how do we have the capacity we need, depending on which demand scenario we're looking at, and then try to late bind as much of the cost as possible so that we can adapt to an evolving market. That's really where we're going to be focused in terms of how we deliver it for the best unit cost we can. But with an eye to the market being made now, having capacity for the customer and being really focused on delivering for them is still kind of the focus.

Michael Turrin: How much, if at all, does macro come into your planning purview? Because the reason I ask it that way is because this is a long-term technology trend. Inherently, macro should matter somewhat less. But how much is taken into account, just the environment that we're in, in terms of the planning cycle?

Mat McBride: It certainly factors in to some extent, but I would say that if we have a great story about the value that's delivered to the end customer -- a benefit in terms of either savings or some other form of value -- that's attractive in lots of markets. And so I think if we do our job right, we're going to be able to participate in lots of different markets, and that is our focus, more than trying to predict where the economy is going in the next 2 weeks, 10 months, and beyond.

Michael Turrin: Makes sense. We're down to the last one, I think. And so I'm just -- I'm curious what you think we'll be talking about in three years, if we have this conversation again. Are you still running at the same rapid pace? Like what-

Mat McBride: I hope so.

Michael Turrin: What do you see on the horizon, if we're thinking from a three- to five-year perspective...

Mat McBride: Yes. I think, as a maybe tech-nerd/finance person, I'm really, really excited to see the new innovation that will come from a more and more agentic world, and what will happen to make the ease of adoption something that we all experience in our day-to-day usage of these things, and what cool innovations we're going to come up with -- new ways to monetize and deliver that more efficiently for shareholders. And so I'd like to say, I feel like I have one of the best jobs in the industry. I feel very grateful. Like, if you'd told me 10 years ago-

Michael Turrin: Very interesting perspective. Yes.

Mat McBride: That I'd be in my job, like I wouldn't have believed you. And certainly a younger version of me would have been super surprised. But I feel very, very lucky to be part of it. And it is exciting every day. I mean, I'm a very interesting balance of exhausted and excited all at the same time, because where else do you get to deal with these kinds of huge challenges and opportunities at the same time? And so I'm really excited about where the future is going to go. I think it's going to be a fundamentally different place than what we're talking about now, when you talk about three to five years from now.

Michael Turrin: That's great. Thank you for sharing some of the excitement.

Mat McBride: Yes. Thank you, Michael.

Michael Turrin: Making the trip down the coast -- very much appreciate it, Mat. Yes. Thanks for the time.

Mat McBride: Pleasure to be here. Thank you.
