
Ethical AI: 5 principles for every business to consider

As we near the end of 2019, there’s a lot to reflect on in the world of data and AI – from the tools available to us to how we use them in an ethical way. I spoke to some of these reflections at TechUK’s Digital Ethics Conference earlier this week.

Phil Harvey speaking at TechUK Digital Ethics Conference

For those of you who weren’t at the Digital Ethics Conference, you can think of data and intelligence as the beating heart of digital transformation. It gives us new ways of knowing and new things to know. As you learn how to empower your employees, engage customers, optimise operations, and transform your products, data gives you the essential digital feedback that guides your decision-making. This could be anything from customer or employee feedback to product telemetry or CRM data.

Digital feedback loop visual

New tools = new responsibility

With tools like Azure Cognitive Services, pre-built models exposed through APIs covering text, speech, and vision mean it’s never been easier to implement AI and harness the power of the data around us. But it’s not simply about what you can do with AI; it’s about what you should do with it. Take the example of facial recognition: a face is a form of personally identifiable information (PII), and you must have the active consent of those whose faces you process if you’re to use this kind of AI in an ethical way.
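As a flavour of how little code a pre-built model needs, here’s a minimal sketch using the Azure Text Analytics client library for Python. The endpoint, key, and example text are placeholders, not values from the talk:

```python
# A minimal sketch: calling a pre-built Cognitive Services model for
# sentiment analysis. Endpoint and key are placeholders for your own resource.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# One API call, no model training required.
docs = ["The checkout process was painless and fast."]
for doc in client.analyze_sentiment(docs):
    if not doc.is_error:
        print(doc.sentiment, doc.confidence_scores)
```

The ease is exactly the point: when implementation is this cheap, the hard questions stop being technical and become ethical ones.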

The principles of ethical AI

When it comes to AI, it’s important to understand the principles under which your organisation operates. At Microsoft, our AI principles are very clearly stated, and Brad Smith has actively discussed the need for regulation when it comes to the use of facial recognition technology.

Here are five key principles for every business to consider when it comes to implementing responsible and ethical AI…

Visual showing Microsoft's ethical AI principles

1. Fairness

This principle relates, in part, to unconscious human bias. Humans operate under many kinds of unconscious bias as a way of shortcutting their decision-making, and we have to work hard to identify these biases and learn to correct them. When a machine learns from data about human activity, it can capture that bias and store it within the model it generates, and an AI system using that model can then make decisions that copy or amplify the unfairness. While it may at first appear innocent, where you live contains implicit information about you: letting a model use, say, your postcode to decide outcomes is unfair.
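To make the postcode example concrete, here’s a minimal sketch (with a made-up, hypothetical decision dataset) that checks whether a model’s outputs diverge across postcode areas. A large gap in approval rates is a red flag that the model has learned location as a proxy for something it shouldn’t decide on:

```python
import pandas as pd

# A minimal sketch, assuming a hypothetical loan-decision dataset with
# 'postcode_area' and the model's binary 'approved' output.
df = pd.DataFrame({
    "postcode_area": ["E1", "E1", "E1", "SW3", "SW3", "SW3"],
    "approved":      [0,    0,    1,    1,     1,     1],
})

# Approval rate per postcode area: a big gap suggests the model is
# treating where you live as a stand-in for who you are.
rates = df.groupby("postcode_area")["approved"].mean()
print(rates)
print("Disparity (max - min rate):", rates.max() - rates.min())
```

Simple disaggregated checks like this won’t prove a system is fair, but they’re a cheap way to spot when it plainly isn’t.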

2. Safety and reliability

One example to focus on here is what’s known as ‘automation bias’: a person’s expectation that an automated process or computer is infallible. Examples include tourists ‘believing’ SatNav systems and driving into the ocean, or falling asleep at the wheel of a self-driving car that subsequently crashes into an unrecognised person or obstacle in the road. We come to rely on machines too soon because they do amazing things for us. If you’re automating things for your users, have you considered how you keep them safe when they fall foul of automation bias?
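One common mitigation, sketched here in hypothetical form rather than as anything prescribed in the talk, is to act automatically only when the model is confident and hand low-confidence cases to a human:

```python
# A minimal human-in-the-loop sketch: only automate high-confidence
# predictions; route everything else to a person for review.
# The 0.9 threshold and both handler functions are illustrative.
CONFIDENCE_THRESHOLD = 0.9

def act_automatically(item, prediction):
    print(f"Auto-handled {item}: {prediction}")

def escalate_to_human(item):
    print(f"Queued for human review: {item}")

def handle(item, prediction, confidence):
    if confidence >= CONFIDENCE_THRESHOLD:
        act_automatically(item, prediction)
    else:
        escalate_to_human(item)

handle("road object #42", prediction="clear road", confidence=0.61)
```

Designing in an explicit hand-off point means the system itself signals when a human should stop ‘believing’ it.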

3. Privacy and security

I have already mentioned the need for regulation in facial recognition. Laws such as GDPR require organisations to gather users’ active consent, for clearly stated reasons, before using their PII. Are you actively gathering this consent? (There’s a sketch of what such a check might look like at the end of this section.) From a security perspective, new technology opens up new attack vectors for your organisation. And if you’re deploying facial recognition, are you sure it’s reliable enough to recognise everyone?


Photo showing Phil Harvey using facial recognition at an event


This is me at an event in London. I grew my beard so long that it became adversarial to the AI and I was not recognised as a person. No one knew I was there.

If you’re not inclined to grow a beard, you could look at your makeup options. Or maybe print some new glasses?
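On the consent question above, here’s a minimal sketch, assuming a hypothetical consent registry keyed by user ID. The point is simply that PII processing should be gated on recorded, purpose-specific consent:

```python
# A minimal sketch of gating PII processing on recorded consent.
# The registry, user IDs, and purpose strings are all hypothetical.
consent_registry = {
    # user_id -> set of purposes the user has actively consented to
    "user-123": {"event-badge-photo"},
}

def has_consent(user_id: str, purpose: str) -> bool:
    return purpose in consent_registry.get(user_id, set())

def process_face_image(user_id: str, image_bytes: bytes) -> None:
    if not has_consent(user_id, "facial-recognition"):
        raise PermissionError(f"No recorded consent for {user_id}")
    # ... only now hand the image to your facial recognition service ...

try:
    process_face_image("user-123", b"...")
except PermissionError as err:
    print(err)  # consent was recorded for badge photos, not recognition
```

Note that the check fails even though the user consented to something: consent under GDPR is tied to a clearly stated purpose, not granted in general.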

4. Inclusiveness

Facial recognition AI that hasn’t been trained or tested with the appropriate length of beard is open to adversarial beard attacks. More seriously, poor testing or unrepresentative training data means what you create may exclude people. At Microsoft, inclusiveness is a core principle for us – just take a look at our Xbox Adaptive Controller as proof.
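One practical habit is to report accuracy per subgroup rather than a single overall number. Here’s a minimal sketch with made-up test results and a hypothetical appearance attribute, where a healthy-looking overall score hides a group the system fails:

```python
import pandas as pd

# A minimal sketch of disaggregated evaluation: made-up test results
# for a recogniser, grouped by a (hypothetical) appearance attribute.
results = pd.DataFrame({
    "group":   ["no beard"] * 4 + ["long beard"] * 4,
    "correct": [1, 1, 1, 1,        0, 1, 0, 0],
})

print("Overall accuracy:", results["correct"].mean())
print(results.groupby("group")["correct"].mean())
# Overall looks tolerable (~0.63), but 'long beard' users are mostly excluded.
```

If a group isn’t represented in your test set, you won’t even get the chance to see this failure before your users do.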

5. Transparency and accountability

The two foundations of Microsoft’s AI principles are transparency and accountability. Transparency is the principle that if a decision is going to be made by an algorithm, that algorithm should be explainable. In machine learning there’s often a trade-off here: the accuracy of a model (how well it performs) is frequently higher when the result is less transparent. The reason is that a model is a piece of mathematics derived from the data examples it was trained on, and the simpler that mathematics, the more transparent the model. In the case of deep learning, the mathematics of the model is almost impossible for any individual to understand. Transparency goes hand in hand with accountability: if there’s a negative outcome from an AI system, who takes responsibility for it? The nearest person? The company that developed the solution? The user?
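To illustrate the transparent end of that spectrum, here’s a minimal sketch using scikit-learn: a logistic regression whose learned coefficients can be read off directly, one per input feature. The feature names and data are made up, but the contrast with a deep network, whose millions of weights offer no such reading, is the point:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A minimal sketch: a transparent model whose maths you can inspect.
# Features and data are hypothetical: [income_band, years_as_customer].
X = np.array([[1, 0], [2, 1], [3, 5], [4, 7], [2, 3], [5, 9]])
y = np.array([0, 0, 1, 1, 0, 1])  # e.g. loan approved or not

model = LogisticRegression().fit(X, y)

# Each coefficient states how much a feature pushes the decision,
# which is exactly the explanation a deep network can't give you directly.
for name, coef in zip(["income_band", "years_as_customer"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

If a decision affects someone materially, being able to point at the coefficient that drove it is part of being able to answer for it.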


When developing your AI solutions, use these examples as inspiration to dig deeper into your own principles for ethical AI, and consider how you can mitigate negative impact before the issues arise.

Find out more

Learn to develop AI solutions responsibly

Download the report: Accelerating competitive advantage with AI

Find out more about AI and its role in society 

How to adopt AI at scale – the right way

About the author

Headshot of Phil Harvey.

Phil is a big, beardy data geek who loves working with data and solving interesting data problems. He’s especially interested in soft skills for technical people in data, in empathy and ethics, and in the impact of data on what people know and how they know it. Starting his career with a BA in AI, Phil has worked in a wide range of industries, from architecture to advertising, and was CTO and technical founder of a data start-up. Phil now works at Microsoft as a Cloud Solution Architect for Data & AI in One Commercial Partner.