Hi, I’m Alexandra!

I am a final year PhD student in the Center for Information and Language Processing at LMU Munich advised by Alex Fraser. My research is in natural language processing and machine learning. I am interested in self-supervised training for supervised (and unsupervised) machine translation and domain adaptation, mainly using monolingual or cross-lingual language models. My most recent research focuses on parameter-efficient methods for transfer learning.

Currently, I am an Applied Scientist Intern at Amazon Web Services (AWS) in Santa Clara, California, working with the AI human language technology group. In summer 2021 and spring 2022, I was a Research Intern at the Allen Institute for AI, working with the AllenNLP team.

Before starting my graduate studies, I obtained a diploma in Electrical and Computer Engineering from the National Technical University of Athens (NTUA). My thesis advisor was Alex Potamianos.

News

September 2022: Invited talk at Lilt AI on our recent NAACL paper (slides).

August 2022: Happy to share that I started as an Applied Scientist intern at Amazon AI in Santa Clara, California! I will be working on speech translation with Prashant Mathur, Brian Thompson, Surafel M. Lakew and the rest of the Doubtfire team.

June 2022: Invited talk on efficient multilingual machine translation at the DG CNECT workshop on large language models (slides).

May 2022: Excited to share that I started another internship with Allen AI, working with Jesse Dodge and Matt Peters!

April 2022: Our paper on efficient domain adaptation of language models has been accepted to appear at NAACL 2022 as a long paper🥳. Take a look at the AI2 blog post about it here!

December 2021: Our pre-print on efficient domain adaptation of language models (done during my internship at Allen AI) is out.

July 2021: Started an internship at Allen AI, working with Jesse Dodge and Matt Peters!

March 2021: Our paper “Improving the Lexical Ability of Pretrained Language Models for Unsupervised Neural Machine Translation” has been accepted to appear in NAACL 2021!

September 2020: Our paper “Reusing a Pretrained Language Model on Languages with Limited Corpora for Unsupervised NMT” has been accepted to appear at EMNLP 2020! Also, our work “Domain Adversarial Fine-Tuning as an Effective Regularizer” was accepted to the Findings of EMNLP.

July 2020: Our system ranked first in the WMT 2020 Unsupervised Translation Shared Task. The task was to create a purely unsupervised machine translation model that translates between Upper Sorbian and German.

Selected Publications

  • Efficient Hierarchical Domain Adaptation for Pretrained Language Models
    Alexandra Chronopoulou, Matthew E. Peters and Jesse Dodge.
    NAACL 2022.
    [paper] [code] [blog]
  • Improving the Lexical Ability of Pretrained Language Models for Unsupervised Neural Machine Translation
    Alexandra Chronopoulou, Dario Stojanovski and Alexander Fraser.
    NAACL 2021.
    [paper] [code]
  • Reusing a Pretrained Language Model on Languages with Limited Corpora for Unsupervised NMT
    Alexandra Chronopoulou, Dario Stojanovski and Alexander Fraser.
    EMNLP 2020.
    [paper] [code] [slides]
  • An Embarrassingly Simple Approach for Transfer Learning from Pretrained Language Models
    Alexandra Chronopoulou, Christos Baziotis and Alexandros Potamianos.
    NAACL 2019.
    [paper] [code]