
Hi! I’m Alberto Muñoz-Ortiz, a final-year PhD student in Natural Language Processing at the LyS group at Universidade da Coruña in A Coruña, Spain (thesis defense expected in Q2 2026). My doctoral research is supervised by David Vilares and Carlos Gómez-Rodríguez. I am actively seeking Research Engineer and Applied Scientist roles where I can apply my expertise to real-world problems.

My research investigates linguistic structure from two complementary perspectives. On the one hand, I use it to improve the performance, efficiency, and robustness of NLP systems, especially in low-resource or non-standard language settings. This involves exploring novel representations, such as linearizing complex tasks like nested NER into simple sequence labeling, and using pixel-based visual models for transfer learning.

On the other hand, I apply these same linguistic structures as an analytical tool to interpret and understand language models. This includes probing their internal knowledge using syntactic dependencies and analyzing the distinct linguistic patterns found in text generated by large language models, comparing them with those of human-written text.

Previously, I was a visiting researcher at the MaiNLP Research Lab at LMU in Munich, Germany, hosted by Barbara Plank during the summer of 2023. With Barbara and Verena Blaschke, I explored the use of pixel-based models to transfer knowledge from Standard German to non-standard German varieties.

More recently, I visited the NLP Lab at EPFL in Lausanne, Switzerland, from October 2024 to March 2025. There, hosted by Antoine Bosselut and Gail Weiss, I explored the grokking phenomenon in autoregressive transformers and the importance of basic facts in training data for model generalization.