
How to adopt AI at scale – the right way

Blogger Series - How To Adopt AI

AI is increasingly becoming a core technology for companies, yet we are still in the early stages of understanding what AI systems will be capable of. Right now, AI is very good at recognising photos, such as identifying people to tag on social media, or recognising words in voice commands for a chatbot. We’re seeing businesses across all industries adopt AI to transform their customer experience. But we are a long way from systems with the general ability to understand the world, use judgement, or be creative, as our research into the UK’s current AI scene, ‘Accelerating competitive advantage with AI’, shows.

Growth of AI

AI already plays a significant role in many people’s lives, and this is expected to grow. In June 2018, we polled over 1,300 people across the UK on their views on AI.

88 percent of those we polled were familiar with the term ‘artificial intelligence’. 79 percent agreed that computers and technology have become smarter in the last five years.

Of those polled, 29 percent described AI as already useful to them, and 46 percent expected it to be useful in five years’ time.

At the same time, a significant proportion of those we talked to were already making extensive use of smart speakers and virtual assistants, integrating them into their daily lives. Despite this, they didn’t realise they were using AI when they asked about the weather, transport, or who got through on Britain’s Got Talent the previous night.

AI is increasingly becoming a core part of the technology toolkit available to almost every organisation, large and small, and a crucial component of remaining competitive. We are seeing growing interest in, and deployment of, AI solutions. However, it is equally important that organisations rolling out AI carefully consider the ethical and societal consequences of their decisions.

Not just a story anymore

Over half of the public say that they most commonly hear about AI through fiction. This technology has the power to disrupt, cause harm, or do good on an unprecedented scale.

Stories like 1984, Ex Machina, and I, Robot undoubtedly cast light on genuine concerns, from the importance of protecting privacy to the challenges of designing safe and reliable AI. At the same time, fiction can easily mislead the public about how AI works and how sophisticated it really is today.

AI for good

Reliability and accountability are both crucial to ensure that AI technology is successfully and sustainably deployed and works equally well for everyone.

We shouldn’t be afraid of this new world, or assume that, as a society, we can’t solve issues of liability. We did this when the motor car became widespread: we came up with rules, codes of conduct, and insurance for protection.

At Microsoft, we believe taking a human-centred approach is important when you’re looking to adopt AI. AI isn’t designed to replace us; it’s designed to extend our capabilities, allowing us to be more creative and innovative.

What matters is that we have agency over AI: that we know where to go when things go wrong, and that mistakes can be corrected.

Regulators also have an important role to play here. A risk-based approach focused on outcomes rather than technology can encourage and support innovation. For example, in financial services, companies already take account of anti-discrimination regulation. The regulatory regime simply needs to ensure all regulated businesses understand that this approach is technology-neutral and applies equally to existing solutions and to those being developed with AI at their core.

AI principles

AI systems are getting more sophisticated and are starting to play a larger role in people’s lives. It’s imperative for companies to develop and adopt clear principles that guide the people building, using and applying AI systems.

Among other things, these principles should ensure that AI systems are fair, reliable and safe, private and secure, inclusive, transparent, and accountable. To help achieve this, the people designing AI systems should reflect the diversity of the world in which we live.

At Microsoft, we believe AI should embody these principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

We have also created the AI and Ethics in Engineering and Research (AETHER) Committee. AETHER brings together senior leaders from across the company to form internal policies and respond to issues as they arise. Its aim is to ensure that our AI platform and experience efforts remain deeply grounded in our core values and principles and, most importantly, benefit broader society.

One of the ways we are doing this is by investing in strategies and tools for detecting and addressing bias in AI systems. AI is a great opportunity, but we need to ensure we always act responsibly for our customers and partners.
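To make ‘detecting bias’ more concrete, here is a minimal, illustrative sketch (our own example under stated assumptions, not Microsoft’s tooling) that computes the demographic parity difference: the gap in positive-prediction rates between groups defined by a sensitive attribute. The model outputs and group labels below are hypothetical.

```python
# Illustrative bias check (assumption: a binary classifier and a known
# sensitive attribute per record). It compares positive-prediction rates
# across groups, a common first signal of disparate impact.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two groups (0 = parity)."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical predictions from a loan-approval model, split by groups A and B
preds = [1, 0, 1, 1, 0, 1, 0, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(preds, group))                # {'A': 0.75, 'B': 0.25}
print(demographic_parity_difference(preds, group))  # 0.5: a large gap worth investigating
```

In practice, a team would look at several metrics of this kind (including error rates per group) and at the underlying data before deciding whether, and how, to mitigate.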

AI skills

At Microsoft, we see skills and education as driving tech intensity and AI. And it’s not just about the technical side: it’s also important to build the soft skills that drive innovation and help us decide how to adopt AI in our organisations ethically, responsibly, and in the right way.

We also see ourselves as the technology partner that helps organisations and partners build their own capabilities. This drives trust, as well as lifting the skills base everywhere. We offer various free courses and educational resources, such as:

AI Academy

Our AI Academy pulls together a collection of courses and learning resources to help you develop the skills you need to work with and adopt AI, so you can fully embrace its potential – whether that’s to increase your productivity or create a stronger customer experience.

AI Academy: https://www.microsoft.com/en-gb/athome/digitalskills/exceed/

Digital Skills

Within the next two decades, 90 percent of jobs will require some level of digital proficiency, while the shortage of technical skills continues to grow. And although the need for digital skills is rising, our own research and experience have highlighted a widening cloud skills gap.

Improve your digital skills: https://www.microsoft.com/en-gb/athome/digitalskills/improve/

Correct adoption = better business

AI is a vital step for organisations if they want to succeed in the future of work. From optimising operations and transforming products, to engaging customers and empowering employees, there can be no doubt that AI is set to re-invent traditional ways of working.

However, it must be built on a strong ethical framework with human values at the centre. A framework that protects data privacy, guards against the malicious misuse of AI, and lays out clear guidelines around issues like inherent bias, automation, and where responsibility lies when things go wrong.

Having an ethical and responsible approach when you adopt AI is good for business. Organisations that are investing in establishing the right approach to AI technology now – specifically, by developing underlying values, ethics, and processes – outperform those that are not by 9 percent.

Download the full AI report: https://aka.ms/acceleratingai

 

Find out more

Maximise the AI opportunity

About the author

Kate Rosenshine currently leads the Data and AI Cloud Solution Architecture team for Financial Services at Microsoft UK, helping organisations shape their data strategies in a scalable and responsible way. Her main focus lies at the intersection of technology and business: how data can shape organisations and AI systems. Prior to joining Microsoft, she worked at start-ups, where she focused on leveraging big data and behaviour analytics to augment decision making.

Kate comes from a background in scientific research, specialising in neurobiological genetic engineering. During her research, she studied the influence of genetics on behaviour and survival mechanisms. She holds an MSc in Molecular Biology from Bar Ilan University and an MBA from Tel Aviv University.

 

This blog was written in collaboration with David Frank (Government Affairs Manager) and Tom Morrison Bell (Government Affairs Manager).