Microsoft Turing Academic Program (MS-TAP)

The program includes collaborative projects with academia to stress-test large natural language models.

Round 1 proposals: Natural Language Representation Model

  • University of California, Berkeley: Bin Yu (PI), Chandan Singh (PhD student), Wooseok Ha (Postdoc), Briton Park (PhD student), Robert Netzorg (PhD student)

    Departments: Department of Statistics, Department of Electrical Engineering and Computer Sciences, and Center for Computational Biology, UC Berkeley; Department of Urology, UC San Francisco

    University of California, San Francisco: Anobel Odisho (PI)

    Microsoft: Saurabh Tiwary, Peter Potash, Ali Alvi, Kris Ganjam, Alina Stoica Beck, Dean Carignan, Forough Poursabzi Sangdeh

    Recent language models, such as Microsoft’s Turing Natural Language Representation (TNLR) models, have shown an impressive ability to capture semantic information that transfers to new tasks. Much of human medical intelligence is contained in free-text medical notes such as pathology reports. Automated data extraction from such notes will be a key driver for the delivery of precision medicine (e.g., providing patient-level cancer grade information for treatment selection) and is also important for clinical research, risk stratification, and clinical trial enrollment. We propose to evaluate these language models against our current methods to improve performance on natural language processing tasks related to automated data extraction. (A minimal fine-tuning sketch appears after the project list.)

  • Brown University/LUNAR: Ellie Pavlick (PI), Michael Littman (faculty), Roma Patel (PhD student)

    Microsoft: Saurabh Tiwary, Ali Alvi, Pooya Moradi, Dean Carignan, Forough Poursabzi Sangdeh

    Recent studies have shown that neural language models (LMs) trained on large text corpora can encode societal biases. Such models risk generating toxic text, whether in response to deliberate manipulation or to seemingly innocuous natural language prompts. This is especially worrying when these language models form the basis of NLP systems deployed in production. We propose to investigate the extent to which generative LMs are sensitive to the subtle linguistic framing effects that underlie linguistic theories of bias in written text. In particular, we test whether prompts that contain linguistic markers of author bias (e.g., hedges, implicatives, subjective intensifiers, assertives) result in measurable differences in models’ generated passages. We then propose several measures to identify linguistic markers in text that cause LMs to exacerbate this bias in their generations, as well as methods that attempt to mitigate it. (A minimal paired-prompt probe is sketched after the project list.)

  • EPFL: Robert West (PI), Maxime Peyrard (Postdoc), Martin Josifoski (PhD student)

    Microsoft: Saurabh Tiwary, Barun Patra, Ali Alvi, Vidhan Agarwal, Sarv Ghotra, Dean Carignan, Forough Poursabzi Sangdeh

    We propose to reduce the impact of spurious correlations within large language models by leveraging invariance-learning principles. In particular, we will continue training existing language models according to the Invariant Risk Minimization (IRM) paradigm in order to enforce domain-invariant representations and achieve better out-of-domain generalization. (A minimal sketch of the IRM penalty appears after the project list.)

  • Georgia Tech: Munmun De Choudhury (PI), Alan Ritter (faculty), Duen “Polo” Horng Chau (faculty), Diyi Yang (faculty), Mai ElSherief (Postdoc); PhD students: Caleb Ziems, Ashutosh Baheti, Yang Chen, Jay Wang, Austin Wright; Undergraduates: Zhaoran Ma, Vincent Lieng, Omar Shaikh

    Microsoft: Saurabh Tiwary, Ali Alvi, Dean Carignan, Forough Poursabzi Sangdeh

    This project proposes a general set of research thrusts for understanding and using large pretrained language models, specifically leveraging Microsoft’s Turing project language models to (a) understand and quantify biases in diverse societal contexts, and (b) propose methods to mitigate the biases that get encoded in neural models built from naturally occurring behavioral trace data.

  • University of Washington: Yejin Choi (PI), Noah Smith (PI), Maarten Sap (PhD student), Anna Jafarpour (Postdoctoral fellow) and Elizabeth Clark (PhD student)

    Microsoft: Saurabh Tiwary, Pooya Moradi, Zhun Liu, Ali Alvi, Kriti Aggarwal, Dean Carignan, Forough Poursabzi Sangdeh

    Recent computing and deep learning advances have led to significant improvements in language modeling and text generation. However, these new language models (LMs) are known to generate toxic and false information with human-like fluency, which prevents their safe deployment. We propose two lines of research to address these issues, focusing on Microsoft’s Turing Natural Language Representation (TNLR) models. First, we will investigate methods for reducing toxic generations and for improving readers’ ability to distinguish between human- and machine-generated text. Second, we will look at how models distinguish between real and fake events and at how best to introduce new knowledge through fine-tuning and conditioning. (A minimal toxicity-aware reranking baseline is sketched below.)
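
Illustrative sketches

The sketches below are not drawn from the proposals themselves; each is a minimal, hedged illustration of the kind of technique the corresponding abstract describes, with stand-in models and invented data.

For the UC Berkeley/UC San Francisco project, this sketch fine-tunes a pretrained encoder to predict a structured field (for example, a coarse tumor grade) from pathology report text. The model name "bert-base-uncased", the label set, and the example report are all placeholder assumptions; the TNLR models themselves are not publicly available.

```python
# Minimal sketch (assumed setup, not the project pipeline): fine-tune a generic
# pretrained encoder to predict a structured field from pathology report text.
# "bert-base-uncased" stands in for TNLR; labels and the example report are invented.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["grade_low", "grade_intermediate", "grade_high"]            # hypothetical label set
reports = ["Prostate, biopsy: adenocarcinoma, Gleason score 3+4=7."]  # placeholder report
targets = [1]                                                         # placeholder gold label

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

batch = tokenizer(reports, truncation=True, padding=True, return_tensors="pt")
batch["labels"] = torch.tensor(targets)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                      # a few illustrative gradient steps
    outputs = model(**batch)            # loss is computed from the provided labels
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```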
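
For the Brown University/LUNAR project, this sketch compares generations from paired prompts that differ only in a single framing marker (here, the subjective intensifier "clearly"). GPT-2 replaces the Turing models, the prompt pair is invented, and an off-the-shelf sentiment classifier is a crude stand-in for a proper bias or toxicity scorer.

```python
# Minimal sketch (assumed setup): probe whether a single framing marker in the
# prompt measurably shifts generated continuations.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
scorer = pipeline("sentiment-analysis")

# Paired prompts differing only in the intensifier "clearly" (invented example).
prompt_pairs = [
    ("The senator's proposal is a mistake.",
     "The senator's proposal is clearly a mistake."),
]

for neutral, framed in prompt_pairs:
    for prompt in (neutral, framed):
        continuation = generator(prompt, max_new_tokens=40, do_sample=True,
                                 return_full_text=False)[0]["generated_text"]
        score = scorer(continuation)[0]
        print(f"{prompt!r} -> {score['label']} ({score['score']:.2f})")
```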
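
For the EPFL project, this sketch shows the IRMv1 gradient penalty computed per "environment" (for example, per text domain). A toy linear classifier and random data stand in for the large pretrained language model that the proposal would continue training.

```python
# Minimal sketch (assumed setup): IRMv1 penalty = squared gradient of the risk
# with respect to a dummy scale w = 1.0 applied to the logits, summed over environments.
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    w = torch.tensor(1.0, requires_grad=True)
    loss = F.cross_entropy(logits * w, labels)
    grad = torch.autograd.grad(loss, [w], create_graph=True)[0]
    return grad.pow(2)

model = torch.nn.Linear(16, 2)          # toy stand-in for an LM with a task head
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
penalty_weight = 10.0

# Two toy environments with random features and labels.
environments = [(torch.randn(32, 16), torch.randint(0, 2, (32,))) for _ in range(2)]

total_loss = 0.0
for x, y in environments:
    logits = model(x)
    total_loss = total_loss + F.cross_entropy(logits, y) \
        + penalty_weight * irm_penalty(logits, y)

total_loss.backward()
optimizer.step()
optimizer.zero_grad()
```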
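
For the University of Washington project, this sketch shows one simple baseline for reducing toxic output: sample several candidate continuations and keep the least toxic one. GPT-2 again replaces the Turing models; "unitary/toxic-bert" is one publicly available toxicity classifier chosen here purely for illustration, and the call assumes a recent version of the transformers library.

```python
# Minimal sketch (assumed setup): rejection sampling / reranking of candidate
# generations by a toxicity score.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
toxicity = pipeline("text-classification", model="unitary/toxic-bert", top_k=None)

def least_toxic_continuation(prompt, n_candidates=5):
    candidates = generator(prompt, max_new_tokens=40, do_sample=True,
                           num_return_sequences=n_candidates,
                           return_full_text=False)

    def toxic_score(text):
        scores = toxicity([text])[0]    # list of {label, score} dicts for this text
        return next(s["score"] for s in scores if s["label"] == "toxic")

    return min((c["generated_text"] for c in candidates), key=toxic_score)

print(least_toxic_continuation("The new neighbors moved in last week and"))
```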