About

I am currently a Research Scientist at Google DeepMind on the GenAI team, working on LLM post-training (Gemini). My main focus is (system) instruction following, in both agentic and non-agentic settings.

I earned my PhD in Computer Science from the University of Munich, where I was advised by Alex Fraser. My research focused on combining information from various languages and domains to enable positive transfer during parameter-efficient fine-tuning of language models, especially under resource constraints. During my PhD, I interned at Google DeepMind in Berlin. Prior to that, I interned twice at the Allen Institute for AI on the AllenNLP team. I also spent a few months at Amazon AI in Santa Clara, CA, as an intern in the AI human language technology group.

Before my doctoral studies, I did my undergrad in Electrical & Computer Engineering at the National Technical University of Athens (NTUA).

News

March 2026: I will give a talk at the AI4Science Summer School back home in Athens, Greece, on July 16th and 17th!

February 2026: Invited by Lauren Beltrone, I participated in a panel at Barnard College on deciding between “industry” (product) and “research” (academia).

December 2025: I attended NeurIPS in San Diego and participated in the panel for the Model Merging tutorial.

October 2025: New paper out led by our intern Frederick Zhang on Do LLMs Really Need 10+ Thoughts for “Find the Time 1000 Days Later”? Towards Structural Understanding of LLM Overthinking.

July 2025: The technical report of our most advanced model, Gemini 2.5 Pro, has just been published!

June 2025: The paper Model Merging of Large Language Models, led by our intern Prateek Yadav, has been accepted to Transactions on Machine Learning Research (TMLR).

January 2025: I co-organized RepL4NLP 2025, which was co-located with NAACL 2025 in Albuquerque, New Mexico.

Selected Publications

More

Contact

Feel free to reach out! You can email me at alexandra.xron@gmail.com.