Microsoft Research Blog

Research Focus: Week of January 23, 2023 

January 25, 2023

Welcome to Research Focus, a new series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft. Organizations must digitize various documents, many with charts and images, to manage…

Microsoft Research Blog

Research Focus: Week of November 7, 2022 

November 8, 2022

Welcome to Research Focus, a new series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft. Barun Patra, Saksham Singhal, Shaohan Huang, Zewen Chi, Li Dong, Furu Wei,…

In the news | ZDNet

Microsoft improves Translator and Azure AI services with new AI ‘Z-code’ models 

March 22, 2022

Microsoft is updating its Translator and other Azure AI services with a set of AI models called Z-code, officials announced on March 22. These updates will improve the quality of machine translations, as well as help these services support more…

Collage of four images: a VR haptic pivot device, Ashley Llorens of Microsoft Research, a tractor on a farm, and speakers from the Race and Technology lecture series.
Microsoft Research Blog

Research at Microsoft 2021: Collaborating for real-world change 

December 15, 2021

Over the past 30 years, Microsoft Research has undergone a shift in how it approaches innovation, broadening its mission to include not only advancing the state of computing but also using technology to tackle some of the world’s most pressing…

Z-Code multilingual model

In the news | Microsoft Translator Blog

Multilingual translation at scale: 10000 language pairs and beyond 

November 22, 2021

Microsoft is on a quest for AI at Scale, with the ambition to enable the next generation of AI experiences. The Microsoft Translator Z-Code team is working together with Microsoft Project Turing and Microsoft Research Asia to advance language and…

An illustration of how the image-text contrastive and translated-text contrastive tasks work together: image-caption training data aligns the image and English domains, parallel-corpus training data aligns the English and non-English domains, and the resulting effect is that all three domains intersect in a shared representation space.
Microsoft Research Blog

Turing Bletchley: A Universal Image Language Representation model by Microsoft 

November 1, 2021 | Saurabh Tiwary

Today, the Microsoft Turing team is thrilled to introduce Turing Bletchley, a 2.5-billion parameter Universal Image Language Representation model (T-UILR) that can perform image-language tasks in 94 languages. T-Bletchley has an image encoder and a universal language encoder that vectorize…

Figure 1. Trend of sizes of state-of-the-art NLP models over time
Microsoft Research Blog

Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, the World’s Largest and Most Powerful Generative Language Model 

October 11, 2021 | Ali Alvi and Paresh Kharya

We are excited to introduce the DeepSpeed- and Megatron-powered Megatron-Turing Natural Language Generation model (MT-NLG), the largest and most powerful monolithic transformer language model trained to date, with 530 billion parameters. It is the result of a research collaboration…

XTREME leaderboard showing T-ULRv5 at the top.
Microsoft Research Blog

Microsoft Turing Universal Language Representation model, T-ULRv5, tops XTREME leaderboard and trains 100x faster 

September 28, 2021 | Saurabh Tiwary and Lidong Zhou

Today, we are excited to announce that with our latest Turing universal language representation model (T-ULRv5), a Microsoft-created model is once again the state of the art and at the top of the Google XTREME public leaderboard…

Microsoft Research Blog

Learning web search intent representations from massive web search logs 

July 21, 2019 | Paul Bennett, Chenyan Xiong, and Corby Rosset

Have you ever wondered what happens when you ask a search engine to search for something as seemingly simple as “how do you grill salmon”? Have you found yourself entering multiple searches before arriving at a webpage with a satisfying…
