SIT at MixMT 2022: Fluent Translation Built on Giant Pre-trained Models

Abdul Rafae Khan, Hrishikesh Kanade, Girish Amar Budhrani, Preet Jhanglani, Jia Xu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper describes the Stevens Institute of Technology's submission to the WMT 2022 Shared Task on Code-mixed Machine Translation (MixMT). The task consisted of two subtasks: subtask 1, Hindi/English to Hinglish translation, and subtask 2, Hinglish to English translation. Our improvements come from the use of large pre-trained multilingual NMT models and in-domain datasets, combined with back-translation and ensemble techniques. The translation output is automatically evaluated against the reference translations using ROUGE-L and WER. Our system achieves 1st place on subtask 2 according to ROUGE-L, WER, and human evaluation; 1st place on subtask 1 according to WER and human evaluation; and 3rd place on subtask 1 according to the ROUGE-L metric.
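For context, the two automatic metrics named in the abstract can be reproduced with standard open-source tooling. The snippet below is a minimal sketch, not the authors' evaluation script, assuming the `rouge-score` and `jiwer` Python packages and a toy sentence pair chosen purely for illustration.

```python
# Minimal sketch: scoring one hypothesis translation against a reference
# with ROUGE-L and WER. The example sentences are hypothetical.
from rouge_score import rouge_scorer  # pip install rouge-score
import jiwer                          # pip install jiwer

reference = "where are you going tomorrow"
hypothesis = "where will you go tomorrow"

# ROUGE-L: F-measure based on the longest common subsequence (higher is better).
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference, hypothesis)["rougeL"].fmeasure

# WER: word-level edit distance normalized by reference length (lower is better).
wer = jiwer.wer(reference, hypothesis)

print(f"ROUGE-L F1: {rouge_l:.3f}  WER: {wer:.3f}")
```

Corpus-level evaluation would average these scores over all reference/hypothesis pairs in the test set.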

Original language: English
Title of host publication: WMT 2022 - 7th Conference on Machine Translation, Proceedings of the Conference
Pages: 1136-1144
Number of pages: 9
ISBN (Electronic): 9781959429296
State: Published - 2022
Event: 7th Conference on Machine Translation, WMT 2022 - Abu Dhabi, United Arab Emirates
Duration: 7 Dec 2022 - 8 Dec 2022

Publication series

Name: Conference on Machine Translation - Proceedings
ISSN (Electronic): 2768-0983

Conference

Conference: 7th Conference on Machine Translation, WMT 2022
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 7/12/22 - 8/12/22
