Inside the Microsoft AI Residency Program with Dr. Brian Broll

Dr. Brian Broll

Episode 89, September 11, 2019

In 2018, Microsoft launched the Microsoft AI Residency Program, a year-long, expanded research experience designed to give recent graduates in a variety of fields the opportunity to work alongside prominent researchers at MSR on cutting-edge AI technologies to solve real-world problems. Dr. Brian Broll was one of them. A newly minted PhD in Computer Science from Vanderbilt University, Dr. Broll was among the inaugural cohort of AI residents who spent a year working on machine learning in game environments and is on the pod to talk about it!

Today, Dr. Broll gives us an overview of the work he did and the experience he had as a Microsoft AI Resident, talks about his passion for making complex concepts easier and more accessible to novices and young learners, and tells us how growing up on a dairy farm in rural Minnesota helped prepare him for a life in computer science solving some of the toughest problems in AI.

Transcript

Brian Broll: My first project was focused on actually trying to get more human-like behavior in Minecraft, which was leveraging a scripted agent and very few human demonstrations. So, given like thirty-three human demonstrations and an existing scripted agent, the question was, how can we incorporate some sort of learning into the agent, not to necessarily make it perform better but to make it more engaging and interesting, and hopefully more human-like?

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: In 2018, Microsoft launched the Microsoft AI Residency Program, a year-long, expanded research experience designed to give recent graduates in a variety of fields the opportunity to work alongside prominent researchers at MSR on cutting-edge AI technologies to solve real-world problems. Dr. Brian Broll was one of them. A newly minted PhD in Computer Science from Vanderbilt University, Dr. Broll was among the inaugural cohort of AI residents who spent a year working on machine learning in game environments and is on the pod to talk about it!

Today, Dr. Broll gives us an overview of the work he did and the experience he had as a Microsoft AI resident, talks about his passion for making complex concepts easier and more accessible to novices and young learners, and tells us how growing up on a dairy farm in rural Minnesota helped prepare him for a life in computer science solving some of the toughest problems in AI. That and much more on this episode of the Microsoft Research Podcast.

(music plays)

Host: Brian Broll, welcome to the podcast!

Brian Broll: Thanks, happy to be here.

Host: So, this one’s going to be different. You’re not actually in Microsoft Research proper; you’re part of a really interesting new program. It’s called the Microsoft AI Residency Program, and you’re part of the very first cohort, right? It started in 2018, its first year. And you’re just finishing up?

Brian Broll: Yep.

Host: Before we begin our “regularly scheduled programming,” give us a bit of an overview about the residency program. Tell us what it is, what’s cool about it, and actually, why you did it.

Brian Broll: Sure, so the residency program is a year-long, fixed-term position where you come in and you work on two different six-month projects. The applicants come from a variety of different backgrounds. A lot of them have a computer science background, but some come from, like, quantum physics, some have more of an NLP background, some ethics and psychology and cognitive psych. So, there’s a variety of different backgrounds and a variety of different degree levels, and people come in and get to work on all sorts of different projects for, again, two six-month periods.

Host: Right. It’s the first instantiation of a residency here. When I hear that, I think of medical residencies, right? And you are a doctor, but a different kind!

Brian Broll: Yeah, yeah, I don’t usually operate, or at least I try not to.

Host: Only on data.

Brian Broll: Yes, exactly, exactly. I mean, there are some things that make a lot of sense about doing a residency. I would suspect that one of the benefits is, certainly, that if they have people here working on research projects with a lot of people at MSR and making a lot of these really great connections, and then they move to, like, some other product team, or stick around at Microsoft, or even return to, like, academia, in some cases to pursue a PhD, in other cases to come on directly as faculty, it certainly facilitates a lot of these relationships and collaborations, either between product teams and MSR, or between other academic institutions and MSR.

Host: Well, at the beginning of every podcast I ask my guests, “What gets you up in the morning?” You’re more at the beginning of your career than most of the people that I have in the booth, so I’m going to tweak the question a bit and ask you what’s on your research roadmap. What kinds of questions excite you and, if you could encapsulate it succinctly, what’s the big goal of your research?

Brian Broll: Yeah, that’s a good question. In the past, I’ve done things with, like, educational and visual programming environments, trying to make more advanced and complex topics more easily accessible to younger users, or young learners in some cases. And I’ve also done some work in, like, related spaces around, like, developer tools, again just trying to see how you can take advanced concepts and make them something that people can not just, like, interact with at a high level, but hopefully have a deep and rich engagement with, so that they can really leverage it for whatever sort of task or problem they really care about, you know…

Host: Yeah.

Brian Broll: Ummm! So, that’s definitely a recurring theme. And I have found that, when I work on different projects, a lot of times I end up coming back to this question. Like, even if I’m not trying to target some, like, end user or developer, or make something accessible, I always find myself wondering, first of all, how could I solve whatever problem I’m working on, and then, one step back, how could I get to the information I need to figure out or solve that problem, and then, like, how could I do this more easily? So I often find myself falling into the same train of thought, even when I’m the only person doing it: I really want to be confident that things are working the way that I think they are. The questions come in a variety of different forms, depending on specifically what I’m working on. But they can vary from, how can I better understand this problem so I can figure out a solution, to, like, how can I really be sure that what I think is happening is actually happening? And when I start asking those types of questions, I always find myself naturally wondering, is there a better way that I can, like, fundamentally approach this, so that it’s an easier task to solve, or to be confident in my solution?

Host: We’ll get to the laziness business later because I think that’s an underrated quality for people… Well, let’s talk for a minute about the graduate work you did, since you are, what I would call, a “newly minted” PhD and your work is really fresh. So, give us an overview of the work you did to get your degree and tell us a little about how that’s informed what you’re doing now.

Brian Broll: Sure. So, I finished up my PhD a year ago at Vanderbilt, and I was working on making distributed computing concepts more accessible to kids. So, basically, we wanted to extend programming environments that are meant to have a really low, like, threshold, so that kids can get started programming really easily. More specifically, I was focused on, like, block-based programming environments, so you might be familiar with, like, Scratch or Snap! Now, my work was trying to empower the students using these environments with the ability to build distributed applications. So, this could include potentially fun and, hopefully, more engaging things like chat applications or multiplayer games, or, like, really, the sky’s the limit. Hopefully you can give them the tools where they can really express their creativity and learn something cool in the process. So, I think it’s cool when you can appeal to other demographics by making it much more grounded, so it feels applicable and relevant…

Host: Yeah.

Brian Broll: …while also trying to make it more social by giving them the power to build network-enabled applications, so they can actually make these games multiplayer, or make chat applications, and then start to reason about the natural things that often come up. I’ve been in classes before where I’ve introduced things like client-server architecture. One that comes to mind is, I was working with a group of eighth graders, and I introduced client-server architecture as we made a chat application during class. And it’s funny how these really huge concepts come up super naturally. Like, they were doing a distributed denial of service, unintentionally, immediately.

Host: What?

Brian Broll: Like, I mean, it makes sense: you have a classroom of students, and I set up a server, and they can all see things, and naturally you get some trolls in the audience who just want… who find out that, well, they can write a loop that’s just an infinite loop and keep sending messages to the server, and then we have to step back and talk about, how do we detect spam, and how do we start filtering this stuff out? But the thing that I want to emphasize about this example is not necessarily that these kids were learning to DDoS, but that these concepts come up organically…

Host: Right.

Brian Broll: …in a way that, like, they can get their hands on it, and then start reasoning about how to solve this problem and about different techniques, and then, hopefully, evaluating the different types of solutions that they might come up with.
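For readers who want to see the server-side fix concretely, here is a minimal Python sketch of the kind of per-sender rate limiting that blunts an accidental denial of service from an infinite send loop. The thresholds and names are illustrative assumptions, not the actual classroom platform’s code.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds: at most 5 messages per sender per second.
MAX_MESSAGES = 5
WINDOW_SECONDS = 1.0

recent = defaultdict(deque)  # sender -> timestamps of their recent messages

def allow_message(sender, now=None):
    """Return True if this sender is still under the rate limit."""
    now = time.monotonic() if now is None else now
    window = recent[sender]
    # Forget messages that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_MESSAGES:
        return False  # looks like a runaway send loop; drop the message
    window.append(now)
    return True

# A server loop would then do, for each incoming (sender, text):
#     if allow_message(sender): broadcast(text)
# so one student's infinite loop trips the limit almost immediately.
```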

Host: So, did they even know that what they were doing had a name, an acronym?

Brian Broll: No, no they didn’t!

Host: Did you tell them?

Brian Broll: Yes, I did, yeah.

Host: You just did a DDoS.

Brian Broll: Yeah, it’s funny. You have to be pretty flexible when you’re doing things like this, because the class can go a lot of different ways.

Host: Oh sure, I’ve taught eighth graders, I know, that’s too funny. Well, how did that work play out and do you find it has legs and you’re building on it?

Brian Broll: Yeah. So that work is a continuing project at Vanderbilt. There are a lot of different projects and efforts that are building on top of it. For example, a year ago, we had two one-week summer camps that were sponsored by the NSA, focusing on cybersecurity in the context of cyber-physical systems. So, more concretely, they were given robots, and then we started introducing them to, like, programming, because we couldn’t assume prior programming experience…

Host: Right.

Brian Broll: …so, we first did some basic introduction to programming, then I started introducing them to programming on, like, robots, and then, after that, started getting into cybersecurity questions. So, essentially, to ground this a little bit more: we had a classroom of students and, like, a collection of robots, and unlike a lot of the existing robotics programming platforms, we didn’t have a single computer that had to be, like, physically connected to, or associated with, a specific robot. It was more like we had a sandbox of students who could interact with a group of robots. They were all assigned a specific one, but this means that kids will start trolling each other and trying to control each other’s robots and things like that. And I think it’s great, because it means that, in the way we design and set up the environment, we can facilitate the, I guess, natural development of some of these sorts of concepts.

Host: Right.

Brian Broll: So, like, encryption comes up pretty naturally then, if we’re, like, well, people keep trying to control my robot and I want it to drive but…

Host: How are you going to stop that?

Brian Broll: Exactly. And then we start talking about codebreaking, and this adversarial nature of cybersecurity lends itself very nicely to a curriculum, too, in the sense that you can introduce one very natural initial fix, and then how you can counter that, and then how you counter the counter, and so on. And you can keep developing along that route when you have students who are changing the encryption on the robot really frequently and trying to prevent replay attacks and all sorts of fun topics.
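Here is a minimal Python sketch of one rung on that ladder of counters: tagging each robot command with a sequence number and a keyed MAC, so that a replayed packet is rejected. The packet format, key handling, and names are illustrative assumptions, not the camps’ actual stack.

```python
import hmac
import hashlib

SHARED_KEY = b"per-robot secret"  # hypothetical: provisioned out of band

def sign_command(seq, command):
    """Tag a command with a sequence number and a keyed MAC."""
    msg = f"{seq}:{command}".encode()
    return msg + hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

class Robot:
    def __init__(self):
        self.last_seq = -1  # highest sequence number executed so far

    def handle(self, packet):
        msg, tag = packet[:-32], packet[-32:]  # sha256 digest is 32 bytes
        expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return "rejected: bad signature"   # sent with the wrong key
        seq, command = msg.decode().split(":", 1)
        if int(seq) <= self.last_seq:
            return "rejected: replay"          # old packet resent by a troll
        self.last_seq = int(seq)
        return f"executing {command}"

robot = Robot()
packet = sign_command(0, "drive forward")
print(robot.handle(packet))  # executing drive forward
print(robot.handle(packet))  # rejected: replay
```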

(music plays)

Host: Well, we talked a little bit about the residency program in general, but I want to talk about what you did while you were here, let’s plunge right into your year at the Microsoft AI Residency Program.

Brian Broll: Yep, so I worked with the reinforcement learning team, and I was working on machine learning in games. So, rather than the kind of machine learning where they’re trying to essentially replace a human with an AI, we were focused more on, how can we leverage machine learning to be, like, complementary to the game development process? So, not necessarily just using games as a sandbox to show that we can learn complex tasks, but asking, how can games actually benefit from incorporating machine learning, either into, like, the development process or into some aspect of the game that complements it later? So, I was working mostly with a couple of people on the RL team here in Redmond, but also collaborated a bit with a bunch of researchers out in Cambridge, who are also working on this…

Host: Cambridge, UK…

Brian Broll: Yeah, Cambridge, UK, yes, who were working on a similar effort. So, my first project was focused on actually trying to get more human-like behavior in Minecraft, which was leveraging a scripted agent and very few human demonstrations. So, given, like, thirty-three human demonstrations and an existing scripted agent, the question was, how can we incorporate some sort of learning into the agent, not to necessarily make it perform better but to make it more engaging and interesting, and hopefully more human-like?

Host: Interesting, all right. How did that go?

Brian Broll: We did find some pretty positive results, and we were able to create a hybrid agent that did demonstrate more of the behaviors that we thought were interesting and engaging from the human perspective. So, like, contextual decision-making, and you also saw similar high-level strategies exhibited by this hybrid agent, where it was able to learn the strategies just from the human demonstrations. So those aspects of it were very positive. There is a lot of flexibility in taking some of these data-driven approaches, and we were hoping that, when defining a new agent in this game, we could use scripting for the parts that make sense to code and use demonstrations for the parts that make sense to learn from data.

Host: Right.

Brian Broll: There are other aspects, or ways to view it, that are less practical and more about thinking through some of the fundamental challenges. One of the fundamental challenges in this context was that the scripted agent gave us a hierarchical policy where we could replace, in our case, the meta-controller, so, basically, the part of the agent that was picking the high-level, like, goals of the agent in this game. And you can assume that humans have a similar type of implicit hierarchical policy. In other words, they’re not just thinking, like, whether or not they should move forward or left, like, without any concern for a high-level plan…

Host: Right.

Brian Broll: …like they’re thinking, like, ah I should go and grab this thing or interact with…

Host: More strategically.

Brian Broll: Exactly, there’s some sort of high-level strategy. Now, one of the challenges in this case was that we were only trying to learn the meta-controller, via imitation from the human demonstrations, and the lower-level policy of the scripted agent was, well, of course, scripted. So that part wasn’t able to learn at all. The challenging part is that we know the lower-level policy, things like pathfinding and such, might not match what the humans are doing, but we still want to be able to imitate. That makes it hard, because you don’t want to try to imitate at the lower level. You can’t just look at, like, different states of the game and compare them between the hybrid agent, where, again, you’re only training the meta-controller, and the human demonstrations, because, given the same goal, they might have different ways of achieving that goal. So really, you want to, essentially, predict the human’s intention and then incorporate that intention into the meta-controller. It tries to predict what a human would try to do, given the state that it’s seen so far, and then reuses the scripted part to actually, like, achieve this intention, or this sub-goal, that it thinks the human would have. So, it’s a tricky problem…

Host: Just a little!

Brian Broll: Yes.
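To make the setup concrete, here is a schematic Python sketch of the hybrid agent Broll describes: a learned meta-controller that predicts the sub-goal a human would pursue, sitting on top of the original scripted low-level policy. The class and method names are illustrative assumptions, not the team’s actual implementation.

```python
class HybridAgent:
    """Sketch: a learned meta-controller over a scripted low-level policy."""

    def __init__(self, meta_controller, scripted_policy):
        self.meta = meta_controller  # trained by imitation on the human demos
        self.low = scripted_policy   # hand-written pathfinding, combat, etc.
        self.subgoal = None

    def act(self, state):
        # Re-plan when there is no current sub-goal or it has been achieved.
        if self.subgoal is None or self.low.achieved(state, self.subgoal):
            # The learned part: "what would a human try to do from here?"
            self.subgoal = self.meta.predict(state)
        # The scripted part carries out the sub-goal, so the agent imitates
        # human intent without having to imitate human pathfinding.
        return self.low.step(state, self.subgoal)
```

The design point is the division of labor: only `meta.predict` is learned from demonstrations; everything below it stays scripted, which is why comparing low-level state trajectories against the human’s would measure the wrong thing.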

Host: Well, what kinds of research thinking does that inspire? I mean, you’ve got to, then, figure out, okay, how am I going to do this? How are we going to take what’s natural up here and put it in the software?

Brian Broll: So, one of the biggest questions, if we’re thinking about how to approach or tackle this, is, how can you actually find some sort of signal in the data that we can use to train the meta-controller? And in this case, it’s not just some signal, it’s really the right signal. So, I gave the example already of looking at states: we could imagine just trying all sorts of different possibilities in the meta-controller and seeing which one gets closest to what the human was doing. But the problem there is that you start to, like, encapsulate things like pathfinding, which just introduces noise, which isn’t really what you’re trying to measure. Because, you could imagine that – now, this is usually easier to explain with a whiteboard! – you could imagine that you see some human taking some arc to whatever their goal is. And there are all sorts of different, like, entities or sub-goals that are possible on the screen, and you could imagine that you have some scripted agent that just goes directly to its sub-goal. Now, if you knew what the person’s intention was, the expected behavior would be an arc in the human demonstration and a direct path in the resulting hybrid agent. But if you’re not careful, and you’re only looking at the low-level states, you would try to force the hybrid agent to take an arc. And you might be able to do this by picking all sorts of garbage sub-goals, really quickly, to make it move directly… I don’t know, a little bit to the right, and a little bit forward, then a little bit back to the left, to create this arc shape, but that’s really not at all what you want. And it gets especially problematic when we start looking at different humans. It’s not that all humans share the same pathfinding logic or low-level sub-policies, so it gets even more complicated and difficult to learn. So, the type of thinking for this project really required you to step back and try to understand, fundamentally, what sorts of approaches could be used to learn what we actually care about here, which is not immediately obvious: how to isolate the aspects of the human demonstrations that we care about, in the context of this hybrid agent. And those parts got a little hairy!

Host: So, another thing you did during the residency was extend the results of the work that you did at the outset. So, what questions were left unanswered, what prompts did you, sort of, gather from what you learned, and how did you go about trying to answer them with follow-up research?

Brian Broll: Sure, so, I mean, I think this is a pretty standard process in a lot of research, where you want to try to make something work in a complex environment, and you might make some compromises on the way, and then you’ll want to step back and see how many of those you can fix or resolve or weaken or loosen. So, one of the things that I’ve been working on during the second project has been trying to, I guess, relax some of the requirements that we had from the first project. Now, specifically, one of the ways that we derived a signal in the first project was to look at the human demonstrations and leverage a labeling function, a concept that’s been used to derive weak supervision for large unlabeled data sets. So, you might get, like, these noisy labels and things like that, but hopefully, given enough data, you can still learn something meaningful. Now, for the first project, I wrote a labeling function and just took the most, I guess, direct approach to try and get signal. Like, in this case, I tried to encode the heuristic that, regardless of whether it’s a human or an AI, we can probably infer what its goal is based on what it moves most toward in the future. So, I had a labeling function that would look ahead into the future, look at what the human moves most directly toward, and say that this is probably a decent approximation of what their sub-goal was. We can do this for every state, and then, even though there might be some noise in the labels, we can hopefully learn something meaningful enough that it reasonably approximates what the human’s intention was.
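For the technically curious, here is a minimal Python sketch of a lookahead labeling function of the kind Broll describes. The trajectory and entity representations, the horizon, and all names are illustrative assumptions, not the team’s actual code.

```python
import numpy as np

def label_subgoals(positions, entities, horizon=60):
    """Weak labels: for each timestep, pick the entity the player closes
    the most distance with over the next `horizon` frames.

    positions: (T, 2) array of player x/y positions (assumed representation)
    entities:  dict mapping entity name -> (2,) x/y position of a candidate
               sub-goal
    """
    labels = []
    T = len(positions)
    for t in range(T):
        future = positions[min(t + horizon, T - 1)]
        best, best_gain = None, -np.inf
        for name, e in entities.items():
            # How much closer does the player get to this entity?
            gain = np.linalg.norm(positions[t] - e) - np.linalg.norm(future - e)
            if gain > best_gain:
                best, best_gain = name, gain
        labels.append(best)  # noisy, but usable as weak supervision
    return labels
```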

Host: Sure.

Brian Broll: The thing that I didn’t like about it was that our labeling function has to have a good heuristic, a good idea of what the human was moving toward, and especially if we’re going to apply this to other domains, it might be harder, for an arbitrary state, to give me a good approximation of what the human is trying to do. And even though it doesn’t have to be a perfect approximation, there are some sorts of domains where this could be really challenging. So, what I’ve been trying to do during this second portion of the project is relax that: rather than providing an approximate signal, or an approximate label, for all states, see if we can just go through and, when there’s a state we’re really confident about, label that one. In other words, this is really just trying to ground the human’s intentions, in the context of Minecraft, in some interaction. It’s reasonable to say that if the player picks up, like, some item, they were probably trying to pick it up. If they attacked a zombie, or some sort of entity, that was probably their goal. So, take five seconds where the player is doing battle with a bunch of different enemies, and interacting with things, like picking up weapons and attacking other entities in the game. Rather than trying to predict the label at each point, and trying to, like, disambiguate when exactly, at what exact frame, it starts moving from one enemy to another, or when exactly it starts to retreat, or when it’s just collecting itself to attack again, rather than trying to disambiguate the labels for those states with a labeling function so directly, we just relax it: when we see something that we’re confident about, like, again, an interaction in the game, we’ll label just those, and then see if, using this much sparser set of labels, we can still get meaningful results for predicting a human’s intention. The idea here is that, if we’re able to do some sort of, like, pretraining, and bias the network on some related task, then maybe, using these sparser labels, we can just fine-tune the original weights. That related task could be predicting, like, distances, or just something that learns more of what the heuristic in the labeling function was encoding. So we can, instead, bias in some of these suspicions that we have by training on related tasks, and then fine-tune on this smaller set of labels that we can trust with much higher confidence, because we actually see some sort of game interaction.
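A sketch of that relaxed labeling scheme, under the same caveats: the event-log format and names below are hypothetical, and the fine-tuning step is only indicated schematically.

```python
def sparse_labels_from_events(events, num_states):
    """High-confidence labels only where the log shows a game interaction.

    events: list of (timestep, kind, target) tuples, e.g.
            (412, "pickup", "sword") or (530, "attack", "zombie")
            -- a hypothetical log format, not Minecraft's actual one.
    Returns a list of length num_states: a target name, or None (unlabeled).
    """
    labels = [None] * num_states
    for t, kind, target in events:
        if kind in ("pickup", "attack", "use"):
            labels[t] = target  # the interaction grounds the intent
    return labels

# Fine-tuning idea, schematically: pretrain the intent network on a dense,
# related task (e.g., predicting distances to nearby entities), then update
# it on only the (state, label) pairs where labels[t] is not None.
```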

Host: Okay. Well, let’s circle back to the laziness issue. In some way, laziness is the great motivator for invention and we talked about this as sort of a recurring theme in the work you’re choosing to tackle, making complex things easier for people, especially for novices and non-experts, and more accessible in the process. So, can you expand on that just a little bit, why it’s important to you?

Brian Broll: Sure, I mean, I can never get over the irony when I find myself sitting at the computer doing repeated tasks, especially when a huge part of programming is about automation.

Host: Right?

Brian Broll: I mean, I guess, a little bit more philosophically, I don’t really like when the tools that I’m working with get in the way of the problem I’m solving. Like, they should really be complementary, and I shouldn’t be spending time trying to, like, I don’t know, wrangle or fight with the tools themselves. I guess that ideology gives me very little patience for doing some repeated task manually, over and over, and definitely, like, pushes me toward finding some way to not only remove or reduce any repetition, but also to see if there’s a way the tools can really get out of the way, so that the person can actually reason about and understand the problem that they’re actually trying to solve. I don’t want to confuse this with trying to find, like, a silver bullet. I’ve run into this a bit, actually, in grad school, when people would sometimes debate, like, visual programming versus, like, textual programming languages, or whether or not block-based programming is a real programming language. Which, I don’t know. I mean, personally, I feel like there’s syntax, there’s semantics, it falls under the category of a programming language, even though, most of the time, the point that they’re really making is, do you use it for, like, industrial-grade applications? Which, no. The ones that I’ve used, and the ones that I’ve written, I think their strength is in making it easy to access some of these complex topics and concepts, and to learn really valuable things, but I’m not recommending that we go and write performance-critical applications in these languages. But I do think that it’s funny when people get this idea of it having to be one or the other, because I think the best set of tools is one where the tools play well with each other. I guess what I should say is, it’s not like a carpenter starts with just, like, a hammer and then tries to build a house entirely with a hammer.

Host: You need a whole set of tools…

Brian Broll: Exactly. You need a huge tool set and you need to make sure these all, like, complement each other and work together nicely.

(music plays)

Host: Well, I can’t let any podcast go by without asking what could possibly go wrong. And this is actually my thing, because I’m that person in eighth grade who would have tried to hack someone else’s robot just because, you know… Do we all use…

Brian Broll: It’s fun??

Host: …our powers for good? Some of us use them for evil. Just saying. Umm. Not full-on evil but just, you know…

Brian Broll: Yeah, yeah, certainly…

Host: Anyway. So, is there anything about the work that you’re doing that keeps you up at night? Any potential downside? And if so, how could you, from the get-go, build in safeguards to mitigate that?

Brian Broll: Yeah, so it’s a really good question. I mean, a lot of what I really liked about the work that I’ve done in the past, and this gets a little bit more into my, I don’t know, preferences and ideologies about, like, software and ecosystems and community and all that good stuff, but I really like when these sorts of tools and concepts can be accessible to people across the board. You know, like, in undergrad, I was a math major and I got a teaching licensure, and my first teaching placement was at a school in need of assistance, so, like, a very large minority demographic. Some of them would come from, like, really rough childhoods and really unsafe home environments and things like that. One of the things that I really enjoyed about a lot of the work I did at Vandy, in trying to make these more complex topics accessible, and one of the things I really like about open source software in general, is this idea of, basically, being able to give people the resources, and the abilities, to get involved and to learn a lot, without regard to, like, a lot of the social issues that can get in the way. Now, that being said, a lot of the work that I’ve been doing has been, like, publicly available. It’s free. If people want to use it, they can just use it. But this does go hand-in-hand with potential misuse. Now, in my case, since I’m mostly trying to empower people to do things that they couldn’t do before, and it’s not a social platform, there are different types of risks and issues, but it is challenging. I mean, just because you teach people some of these, like, cybersecurity concepts, it doesn’t really guarantee that they won’t try to be on the hacker side rather than, you know, like, the white hat side. I haven’t really built in any safeguards right now. I’m concerned about taking away from the people who could really benefit from it, and who already have enough challenges. So that makes it really challenging. I always hope that these types of things can be addressed, ideally, by developing, like, a positive culture and community, and people buying into that. Because you hope that people aren’t doing the right thing just because it’s the only thing that’s not against the law… you know? I mean, like, I’d rather people be motivated to help the community and get involved, and when they’re doing things like that, I really don’t want anything to get in the way, you know. Those are the kinds of scenarios where I feel like, if we can pave the road as much and as far as possible, then hopefully we can build a community around this, and an ecosystem where people want to do good to the people around them. But I realize that there will be cases where people might fall through the cracks, and this might be a little bit more ideological, but that’s what I shoot for, or strive for.

Host: Right. All right. Well I always ask my guests to get us a little bit of their personal story and you’ve talked a little bit about what you’ve done prior to this with your doctoral work and a little bit about the residency program. But give us a little more context on your background, where you’re from, and how you ended up in the Microsoft AI Residency Program.

Brian Broll: So, sometimes it’s funny looking back; things can seem a little bit arbitrary. I mean, I grew up on a dairy farm in Minnesota. I wasn’t sure if I wanted to go to college or not, but other people seemed to be, and then a wrestling coach was recruiting me, so that made it easy. Then I got an academic full ride, which also made it easier. So, I decided to go for my bachelor’s in math. I was planning on going back and teaching high school math and coaching wrestling… and I had taken a couple of computer science classes. I took, like, the intro course and a data structures course and a programming languages course… There had been a few professors in undergrad who had asked, like, why I wasn’t planning on going for, like, my PhD, or going to grad school or anything like that, and I always said I didn’t really want to invest any more money in an education if I didn’t see how it would pay out. So, I was a bit more, I guess, pragmatic, perhaps, given my background and lack of exposure to those kinds of things before. But then, my senior year, one of the professors was asking in more detail about why I didn’t want to go, and when I mentioned that I didn’t want to invest more money in an education if I didn’t see, necessarily, the payout, she said that a lot of PhD programs are covered, and you can get, like, a stipend on top of it, and then I figured I might as well apply.

Host: Yeah.

Brian Broll: So, I had enjoyed doing stuff with computer science, so I thought I would apply to computer science PhD programs even though I didn’t major or minor in it. And then, I actually heard about the Microsoft AI Residency while interviewing on site for the Google AI Residency…

Host: Oh, really!

Brian Broll: …and then came back and applied to this one! So, sometimes it’s funny looking back because sometimes the path to get here can seem pretty arbitrary but…

Host: Non-intuitive…

Brian Broll: Yeah, definitely, but I’m excited to be here.

Host: Well, on the heels of that question, my new favorite one is the one I’m going to ask you next. What’s one interesting thing about you that we couldn’t find on a web search, maybe, that has influenced your career?

Brian Broll: Hmmm. So, I mean, I bring up wrestling, but when I was student teaching and running, like, studies, the students liked to Google me and find pictures of me wrestling and then set them as, like, their background, or incorporate them into their games. So, you can definitely find that on a web search. But it’s hard to really give enough credit to some of the experiences that really play a role in your career development. Like, although growing up on a dairy farm doesn’t really seem, maybe, the most closely related to doing research in AI and machine learning, I think there are certainly a lot of attitudes and perspectives from it that can be really positive. And I think, in some ways, it keeps me very pragmatic, in the sense that I really like seeing some, like, outcome or benefit from what I’m working on, and I think, in that way, being able to work with kids, and trying to give them all the tools to succeed regardless of background, is something that can have some sort of positive impact. And I think some of those experiences, and the pragmatic nature of growing up on a farm, had a little bit of an influence in keeping me grounded that way.

Host: My husband’s father used to say, cows never go on vacation.

Brian Broll: That’s true, that’s true! My wife and I have been together for twelve years. Basically, I got a driver’s license and a girlfriend. So, like when we were dating, a lot of times it was hard for me to get out of chores and things like that, so we spent a lot of dates milking cows and feeding cows and doing all sorts of things like that!

Host: Brian, this has been so much fun. As we close, you’re much closer to the beginning of your career than many of the guests I’ve had on this show, so, I want to put a little different spin on the “what advice would you give to emerging researchers” because you’re kind of one…

Brian Broll: Yeah!

Host: …and I’ll simply ask you, what’s next for Dr. Brian Broll?

Brian Broll: Yeah, so I’m going to be a research scientist at Vanderbilt starting in September, and I’ll be continuing some of the work, trying to increase the impact of some of the work that I did during my PhD, as well as exploring ways that we can make deep learning more accessible, especially to people in the natural sciences. So, I’ll be working on those immediate projects, but I’m definitely interested in seeing other ways to combine some of my experience here, getting more involved with AI and machine learning, with some of the perspectives and kind of driving ideas that I’ve had throughout grad school about trying to make these tools more powerful and accessible, so that hopefully they can have a bigger impact more broadly. So…

Host: Let me ask you a question I was going to ask before, but I really do want to hear what you think. What was your favorite thing about the residency program, and what would you advise people who might be interested in ending up at Microsoft’s AI Residency Program?

Brian Broll: So, my favorite thing is probably the, like, diversity of research projects that are out there and being able to work so closely with so many very impressive researchers in so many different areas. I think it’s really great to be able to have that sort of exposure, especially when you’re trying to learn more about different aspects of AI and machine learning. It’s really hard to replace being able to work so closely with so many different researchers here at MSR.

Host: Brian Broll, thank you for joining us today. It’s been illuminating… and delightful!

Brian Broll: Thanks, it’s been a pleasure.

(music plays)

To learn more about academic programs with Microsoft Research, visit Microsoft.com/research
