
Introducing Copilot Labs and Copilot Vision

AI is still a technology very much in development. The pace of change and diffusion alike are among the fastest in history.  

The more we listen to the people who use our experiences, the more we know how transformative this revolution is, but we also understand that it requires careful and deliberate steps forward. We want to harness the full capabilities of AI Copilots to provide the richest, most useful experiences for users, but to do so in the safest ways we can. We want to balance pioneering features with a deep sense of responsibility.

That’s why we are rolling out Copilot Labs. Before releasing our most advanced tools to all users, we are trialing them with a small subset of users to gather feedback, learn, and then apply those lessons back into the product, making it, we hope, at once better and safer. Available to Copilot Pro users, Copilot Labs offers a glimpse into “work-in-progress” projects just around the corner.

The first feature available in Copilot Labs is Think Deeper, which gives Copilot the ability to reason through more complex problems. Using the very latest reasoning models, it can help with anything from solving tough math problems to weighing up the costs of managing home projects. Think Deeper can take more time before responding, allowing Copilot to deliver detailed, step-by-step answers to challenging questions. Think Deeper begins rolling out today to a limited number of Copilot Pro users in Australia, Canada, New Zealand, the United Kingdom and the United States.

Next up: Copilot Vision.  

A real limitation of the Copilot experience has been that it hasn’t understood what you’re doing or what you’re looking at. Language is, of course, a powerful means of communication, but there’s so much context to any task or interaction that lies far beyond what you can say. One way of addressing this is by giving you the option to let Copilot see what you see. That’s what we are trialing on a limited basis in Copilot Labs with Copilot Vision.

Here your Copilot sits within Microsoft Edge. If you want it to, it can understand the page you’re viewing and answer questions about its content. It can suggest next steps, help you navigate whatever it is you want to do, and assist with tasks. All the while, you simply speak to it in natural language. It’s a new feature that takes the ease and practical use of Copilot to a new level.

In building this new feature, we’ve put the interests of users and creators at the heart of everything. It’s an entirely opt-in feature, invoked only when you decide it might be helpful. You decide strictly when, how and if you want to use it. In this preview version, absolutely none of the content – audio, images, text, your conversation with Copilot – will be stored or used for training. In fact, as soon as you close the feature, that data is permanently discarded.

We are currently blocking the service on paywalled and sensitive content, limiting use to a pre-approved list of websites in the first instance. These are popular sites used by a broad swathe of the population, and when Vision interacts with them it respects their machine-readable controls on AI. Over time we plan to add more, always with safety and responsibility firmly in mind. We are also designing Vision to be a new driver of traffic to websites. When it comes to a paywalled site, Copilot Vision simply won’t comment. Copilot Vision also does not engage directly with the web; it’s there to answer questions rather than take actions.

Have a look at the in-depth Q&A to see more about the safeguards woven throughout. In building Copilot Vision, and Copilot Labs more widely, we’ve kept the interests of users and creators top of mind to balance utility and responsibility. But we will be listening carefully to what you think about this experimental feature. At first, only a subset of Pro users in the United States will have access. We hope you find it useful and look forward to hearing what you think!