IndicBART: A Pre-trained Model for Indic Natural Language Generation
- Raj Dabre
- Himani Shrotriya
- Anoop Kunchukuttan
- Ratish Puduppully
- Mitesh M. Khapra
- Pratyush Kumar
ACL 2022
In this paper, we present IndicBART, a multilingual, sequence-to-sequence pre-trained model focusing on 11 Indic languages and English. Unlike existing pre-trained models, IndicBART exploits the orthographic similarity between Indic scripts to improve transfer learning between related Indic languages. We evaluate IndicBART on two NLG tasks: Neural Machine Translation (NMT) and extreme summarization. Our experiments on NMT for 12 language pairs and extreme summarization for 7 languages using multilingual fine-tuning show that IndicBART is competitive with or better than mBART50 despite containing significantly fewer parameters. Our analyses focus on the impact of script unification (to Devanagari), corpus size, and multilingualism on the final performance. The IndicBART model is available under the MIT license at this https URL.
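To make the script-unification idea concrete, here is a minimal sketch (not taken from the paper) of mapping text from one Indic script to Devanagari using the Indic NLP Library's transliterator; the example sentence and language-code choices are illustrative assumptions.

```python
# Sketch of script unification: convert Bengali-script text to Devanagari
# so that orthographically similar Indic languages share character/subword
# representations. Assumes the indic-nlp-library package is installed.
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator

bengali_sentence = "আমি বই পড়ি"  # illustrative example sentence

# Map Bengali script ("bn") onto the Devanagari script used for Hindi ("hi").
devanagari_form = UnicodeIndicTransliterator.transliterate(
    bengali_sentence, "bn", "hi"
)
print(devanagari_form)
```

Because many Indic scripts are largely isomorphic at the character level, this kind of mapping lets related languages share subword vocabulary and embeddings during pre-training and fine-tuning, which is the transfer-learning benefit the paper attributes to script unification.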