WMT20: lessons learned about machine translation quality improvement

By Jose de la Torre Vilariño

The Fifth Conference on Machine Translation (WMT20), held virtually for the first time on November 19-20, 2020, was full of the latest information and cutting-edge innovations in the ever-changing fields of machine learning and machine translation (MT). The conference featured lively chats and Q&A sessions, research papers, demos and shared tasks in several areas of machine translation, building on annual workshops and conferences that began nearly 15 years ago.

Through more than 800 presentations, the conference provided attendees with the chance to:

  • Evaluate the state of MT
  • Disseminate common test sets and public training data sets with published performance numbers
  • Refine evaluation and estimation methodologies for machine translation

We at Acclaro strive to remain at the forefront of the field, and we drafted some key takeaways from our participation in WMT20. We will use what we learned to innovate and improve our machine translation services — and we look forward to sharing those improvements with you.

Insights into key translation tasks

The WMT20 conference is famous for revealing state-of-the-art scientific improvements in a host of translation tasks. We gained special insight into a few areas that are particularly relevant to our work at Acclaro, including news translations and similar language translations.

News translation tasks

We were thrilled to have more than 153 submissions from the news translation task to review, and we came away with many fascinating ideas to incorporate into our products and services.

There were many findings worth checking out, but we were particularly interested in exploring the presentations from DeepMind and Facebook as well as the session on Direct Assessment.

DeepMind, the Google-owned AI company and research laboratory, takes a unique approach to sentence-level and document-level translation systems. Its submission included a number of state-of-the-art ideas, and we are already incorporating several of them into our projects, notably the document-level translation approach and iterative beam search.
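
For readers unfamiliar with beam search, the sketch below shows the generic procedure these decoders build on: at each step, keep only the highest-scoring partial translations. It is a minimal illustration under simplifying assumptions, not DeepMind's iterative variant, and the `next_token_scores` callable is a hypothetical stand-in for a real NMT decoder step.

```python
def beam_search(next_token_scores, bos="<s>", eos="</s>", beam_size=4, max_len=20):
    """Minimal beam search sketch.

    next_token_scores(prefix) -> dict of {token: log_prob} for the next step;
    it is a hypothetical stand-in for a real NMT model's decoder.
    """
    beams = [(0.0, [bos])]          # (cumulative log-prob, token sequence)
    finished = []

    for _ in range(max_len):
        candidates = []
        for score, tokens in beams:
            if tokens[-1] == eos:   # complete hypothesis: set it aside
                finished.append((score, tokens))
                continue
            for tok, logp in next_token_scores(tokens).items():
                candidates.append((score + logp, tokens + [tok]))
        if not candidates:          # every hypothesis has finished
            break
        # Keep only the beam_size best partial hypotheses.
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_size]
    else:
        finished.extend(beams)      # hit max_len with unfinished hypotheses

    # Return the best hypothesis, normalized by length.
    return max(finished, key=lambda c: c[0] / len(c[1]))
```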

We were deeply intrigued by Facebook AI’s approach of building stronger baseline translation models through iterative back-translation and self-training, paired with stronger language models for noisy channel re-ranking. Facebook is always a great source of knowledge and practical ideas in the MT area. Their approach has already produced a major improvement in many of the main languages we work with, which is why we are incorporating this system from WMT20 into our code repositories.
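
To make the noisy channel idea concrete, the sketch below shows the scoring rule it rests on: re-rank an n-best list by combining the forward model log P(y|x), a reverse "channel" model log P(x|y) and a target-side language model log P(y). The function names and weights are illustrative assumptions, not Facebook's actual code.

```python
def noisy_channel_rerank(source, hypotheses, direct_lp, channel_lp, lm_lp,
                         lam_channel=1.0, lam_lm=0.3):
    """Pick the best hypothesis from an n-best list using the noisy channel score
    log P(y|x) + lam_channel * log P(x|y) + lam_lm * log P(y).

    direct_lp, channel_lp and lm_lp are hypothetical stand-ins for the forward
    translation model, the reverse ("channel") model and a target-side LM.
    """
    def score(hyp):
        return (direct_lp(source, hyp)            # forward model
                + lam_channel * channel_lp(hyp, source)  # channel model
                + lam_lm * lm_lp(hyp))             # language model

    return max(hypotheses, key=score)
```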

Finally, we were impressed by the results achieved with Direct Assessment (DA), the method used for the conference’s large-scale manual evaluation.
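
For context, DA collects 0-100 adequacy ratings from many annotators, standardizes each annotator’s scores to remove individual scoring habits, and averages the standardized scores per system. Below is a minimal sketch of that aggregation step with made-up ratings; the real WMT pipeline includes quality-control steps not reproduced here.

```python
from statistics import mean, pstdev

def direct_assessment_scores(ratings):
    """Aggregate DA ratings into per-system scores.

    ratings: list of (annotator, system, score_0_to_100) tuples (made up here).
    Each annotator's raw scores are z-standardized, then averaged per system.
    """
    by_annotator = {}
    for annotator, system, score in ratings:
        by_annotator.setdefault(annotator, []).append(score)
    stats = {a: (mean(s), pstdev(s) or 1.0) for a, s in by_annotator.items()}

    by_system = {}
    for annotator, system, score in ratings:
        mu, sigma = stats[annotator]
        by_system.setdefault(system, []).append((score - mu) / sigma)
    return {system: mean(z) for system, z in by_system.items()}

# Toy example: two annotators with different scoring habits, two systems.
ratings = [("a1", "sysA", 80), ("a1", "sysB", 60),
           ("a2", "sysA", 95), ("a2", "sysB", 85)]
print(direct_assessment_scores(ratings))  # {'sysA': 1.0, 'sysB': -1.0}
```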

The MT community continues to work with shared toolkits, including OpenNMT, Marian, Sockeye and Fairseq, which reinforces our choice to use those same tools. We also learned more about how the Transformer architecture remains a disruptive force in the research community, while producing models that return great results.
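
At the core of the Transformer is scaled dot-product attention. The snippet below is a minimal, toolkit-independent illustration of that single building block, using NumPy and random toy inputs rather than any of the libraries named above.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # weighted sum of values

# Toy example: 3 query positions attending over 4 key/value positions.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)     # (3, 8)
```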

Similar language translation task

In most MT tasks, English appears as either the source or the target language. That explains why similar language translation (SLT), which translates directly between closely related languages instead of routing through English, is incredibly important to us as we seek to enrich our neural machine translation systems. The task included translations in the following language pairs: Spanish-Portuguese and Spanish-Catalan (Romance languages), Czech-Polish (Slavic languages) and many others.

Results on the SLT task varied across language pairs. The best-performing systems, trained to translate between closely related pairs such as Catalan and Spanish, obtained significantly higher scores in both directions, as measured by BLEU, RIBES and TER, than systems trained on less closely related pairs, including pairs containing only one of the two languages.
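
For readers who want to reproduce metrics like these on their own output, the sacrebleu library implements BLEU (and also chrF and TER). A minimal sketch with made-up Catalan-to-Spanish system outputs and references:

```python
import sacrebleu  # pip install sacrebleu

# Hypothetical system outputs and references for a small Catalan->Spanish test set.
hypotheses = ["el gato está en la alfombra", "me gusta el café"]
references = [["el gato está sobre la alfombra", "me encanta el café"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")
```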

Understanding NLP models & simultaneous translation 

The conference also included workshops, which our Acclaro engineers were immensely pleased to attend. In line with the latest industry trends, these workshops covered natural language processing (NLP) modeling and simultaneous translation.

Interpreting predictions of NLP models

Although neural models in NLP are highly expressive and empirically successful, they can also fail, often in unexpected ways. It can be a challenge to determine why a model fails, especially because these models tend to be opaque in their decision-making processes. As a result, several interpretation techniques have sprung up to explain the predictions of NLP models.

Alongside those techniques are broader ways of understanding models, such as surveys and data set analyses. We also examined a comprehensive study of specific interpretation methods, including saliency maps, input perturbations (e.g., LIME and input reduction), adversarial attacks and influence functions, and analyzed source code demonstrating interpretations for a diverse set of NLP tasks.
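
As a concrete example of one such technique, a gradient-based saliency map scores each input token by how strongly the model’s prediction responds to that token’s embedding. Below is a minimal PyTorch sketch, assuming a generic classifier that accepts token embeddings directly; the `model` and `embed_layer` objects are illustrative placeholders, not any specific toolkit's API.

```python
import torch

def saliency_map(model, embed_layer, input_ids, target_class):
    """One saliency score per token: L2 norm of d(class score)/d(embedding).

    `model` is assumed to take token embeddings and return class logits;
    it and `embed_layer` are illustrative stand-ins, not a library API.
    """
    embeddings = embed_layer(input_ids)      # (1, seq_len, dim)
    embeddings.retain_grad()                 # keep grads on this non-leaf tensor
    logits = model(embeddings)               # (1, num_classes)
    logits[0, target_class].backward()       # gradient of the chosen class score
    # Collapse the embedding dimension to get one score per input token.
    return embeddings.grad.norm(dim=-1).squeeze(0)
```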

Simultaneous translation: the holy grail of AI

Simultaneous translation, which produces the translation while the source speech is still being delivered, has long been considered a holy grail in AI. While it could be useful in many settings, such as international conferences, negotiations, press releases, legal proceedings and medicine, the problem is one of the most difficult to solve.

Recently, with rapid improvements in machine translation, speech recognition and speech synthesis, there has been exciting progress toward simultaneous translation. We explored the design and evaluation of policies for simultaneous translation, and we left with a deep technical understanding of the history, recent advances and challenges that remain in this field.
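
One widely studied family of such policies is wait-k: read the first k source tokens, then alternate between writing one target token and reading one more source token. The sketch below shows only that read/write schedule; the `read` and `write` callables are hypothetical stand-ins for a real streaming speech and translation pipeline.

```python
def wait_k_policy(read, write, k=3):
    """Simple wait-k simultaneous decoding schedule.

    read()                  -> next source token, or None when the source ends
    write(source, target)   -> next target token, or None when translation ends
    Both callables are hypothetical stand-ins for a real streaming MT system.
    """
    source, target = [], []

    # Wait: consume the first k source tokens before emitting anything.
    for _ in range(k):
        tok = read()
        if tok is None:
            break
        source.append(tok)

    # Then alternate: write one target token, read one more source token.
    while True:
        out = write(source, target)
        if out is None:
            break
        target.append(out)
        tok = read()
        if tok is not None:
            source.append(tok)

    return target
```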

Partner with cutting-edge machine translation services

It is always a thrill to gather with the brightest minds in MT. Even with the virtual format, this year was no exception.

We dove into the event with gusto and came out brimming with ideas and enhancements we can make to improve our products and services. We also left feeling confident: we believed we were on the leading edge of the translation services field, and the conference confirmed it. If you’re looking for a translation partner that uses the latest MT systems, focuses on timely deliverables and provides quality customer service, contact Acclaro today.
