“There’s a great deal of interest and enthusiasm about the construction and use of large-scale neural language models. We’re seeing new capabilities—and they’re being pressed into service in exciting applications. However, these models, built in a self-supervised manner from massive corpora, can generate offensive, biased, and costly output. We need to better understand these behaviors and to develop methods for mitigating harms. Considering both the value and potential costs of AI innovations, and developing best practices for addressing the risks, is central in the responsible development and fielding of AI technologies.”
Eric Horvitz, Chief Scientific Officer, Microsoft
“As AI models are becoming more powerful and reaching a large section of population either directly or indirectly through Microsoft’s products and services, it is becoming pertinent to have the best minds of the world look at the impact these models can have and identify mechanisms to improve upon them. We believe strongly in improvements through collaboration and open research. We are excited about the potential contributions these new research collaborations can make, both to Microsoft and to the open research community interested in large-scale, language-centric models.”
Saurabh Tiwary, Vice President & Distinguished Engineer, Microsoft
Program Description
Microsoft is committed to the responsible development and fielding of AI technologies, including careful deliberation about the value and costs of harnessing large-scale neural language models. These models have been delivering breakthroughs in language capabilities, but they have also been found to generate output fraught with bias, toxicity, and denigration.
We have created the Microsoft Turing Academic Program (MS-TAP) as part of our effort to share advances from Microsoft's Turing family of natural language models in a responsible manner. MS-TAP provides leading academics and researchers with a private preview of Turing models. Our goal is to engage our colleagues on shared interests: better understanding model behavior, identifying novel applications, exploring and mitigating potential risks, and improving future models.
Program participants collaborate closely with Microsoft Turing scientists as well as domain experts contributing to Microsoft's work on ethical and responsible AI. As concerns can come to the fore with AI advances, we devote time and effort to better understanding the capabilities, benefits, and costs of AI technologies in advance of general releases to the public. We take a stepwise approach to releasing the technology, in keeping with our dual goals of sharing our technologies broadly and ensuring that AI models are used responsibly and safely in the open world.
Specific goals of MS-TAP include the following:
- Stimulate new, high-impact research on model behavior: Explore the application of Turing language models in a variety of downstream tasks spanning multiple domains of machine learning. Such work expands knowledge of how the models are likely to perform when released more broadly.
- Explore inadvertent and unintended outcomes: Identify and evaluate the spectrum of potential unintended outcomes posed by Turing models, generating new insights into the types and severity of the unintended outcomes the models present.
- Develop practices and tools that can mitigate harms: Identify ways to mitigate potential negative, unintended outcomes. Mitigations can draw on existing tools and practices as well as recommendations for creating specialized new tools and practices.
This program will be carried out in phases. In Phase 1, currently underway, we are testing the Turing Natural Language Representation model. Upon successful completion, future phases will involve larger and more complex models.
Our collaborators
We are working closely with the following research collaborators: