
Microsoft XC Research

Want to build trust in your AI? Here are 3 mistakes to avoid


By Penny Collisson


No longer the stuff of science fiction, artificial intelligence (AI) is everywhere today. But if we want AI to truly benefit people, we need to earn their trust, because people won’t use technology they don’t trust. Despite rapid advances in AI, people still lack confidence in its predictions. Given the choice between trusting a human or a machine, they’ll trust the human time and time again, even when the human is clearly wrong. 

This is partly because AI often works behind the scenes, so many people simply don’t notice when it succeeds. Instead, the moments it goes woefully wrong leave a lasting impression: an upstanding citizen misidentified as a criminal, or items mistakenly ordered by a smart speaker. 

We have our work cut out for us if we want to deliver AI in a way that leaves people trusting it. Some answers have come from recent research*, which is helping us design intelligence that integrates seamlessly into people’s lives.  

chart showing trust in intelligence building over time

The trust curve 

The trust curve (above) highlights three important stages of trust development: 1. Before use, 2. First moments, 3. Repeat use. At any one of these stages, a breach of trust can cause you to lose users. 

Similarly, there are opportunities to ‘do it right’ at each stage:  

  1. Before use, set positive and accurate expectations. People evaluate experiences relative to what they anticipate. Expectations can be positive or negative; accurate or not. If expectations are too low, people won’t engage. If expectations are too high, people will quickly feel disappointed and lose trust.
  2. During the first moments of the experience, set a high bar for value and relevance. These first moments are critical for building trust. If the experience isn’t immediately seen as valuable, people won’t use it, and if a trust breach occurs early, curiosity and trust drop off fast; people are very unlikely to try the experience again. Start where your AI is most likely to be right: algorithms aren’t perfect, and mistakes are more costly in some places than others. While the AI is learning, it will make mistakes, perhaps doing things the user doesn’t expect, so don’t tackle high-stakes tasks at this stage. Save bigger risks for later, once some trust is established and the user’s relationship with the experience is farther along. 
  3. Reward repeat usage by making the experience more personal over time. The simple act of using something makes it feel more familiar and breeds trust. Once you get people engaging beyond the first moments, the next step is to keep them investing. 

What’s next 

We’re making an ambitious promise with AI. We want it to be personal, contextual, anticipatory, self-adapting, and compelling. We want to strike the right balance between using data to create intelligent, personal experiences and maintaining customers’ security and privacy.  

At Microsoft, we have a centralized approach to responsible AI, led by Microsoft’s AI, Ethics, and Effects in Engineering and Research (Aether) Committee and its working groups, along with our Office of Responsible AI (ORA). Together, Aether and ORA work closely with our responsible AI advocates and teams to uphold Microsoft’s responsible AI principles in their day-to-day work. Read more here. 

As with any relationship, it all starts with trust.  

* Three phases of research informed these findings:  

  • An extensive literature review regarding AI and trust 
  • Qualitative interviews with Microsoft Office users based on work tasks 
  • Quantitative validation through surveys regarding work tasks, also in Office 

What do you think? What best practices and observations from your own experience with AI would you add? Will these ideas help you in your next AI project? Let us know! Tweet us at @MicrosoftRI or like us on Facebook and join the conversation.

Penny Collisson leads a team of passionate researchers working on AI and platform capabilities across Office.