Qinyuan Wu
qwu [at] mpi-sws [dot] org
Campus E1 5
66125, Saarbruecken, Germany
I am a third-year PhD student in the CS@Max Planck program at the Max Planck Institute for Software Systems (MPI-SWS), advised by Krishna Gummadi. I am also fortunate to collaborate closely with, and receive guidance from, Evimaria Terzi (Boston University), Mariya Toneva (MPI-SWS), and Muhammad Bilal Zafar (Ruhr University Bochum) (ordered alphabetically by last name). Before joining MPI-SWS, I received my bachelor's degree in mathematics and physics from the University of Electronic Science and Technology of China (UESTC).
I investigate how large language models (LLMs) internalize, represent, and utilize knowledge—seeking to enhance their reliability, interpretability, and safety. My work centers on understanding the interplay between internal learning (from training) and external adaptation (via prompts, retrieval, or tool use).
Ultimately, I aim to understand and improve the loop between how LLMs learn, remember, refer, and act—toward more trustworthy and cognitively grounded AI systems.
Beyond core research, I collaborate on:
- Privacy and security in LLMs – balancing data protection with model utility and efficiency.
- Neuroscience-inspired modeling – linking human memory mechanisms to LLM cognition.
- LLM systems and optimization – exploring how PEFT, quantization, and inference techniques affect learning and behavior.
news
| Jan 26, 2026 | Our papers ‘Rote Learning Considered Useful: Generalizing over Memorized Data in LLMs’ and ‘In Agents We Trust, but Who Do Agents Trust? Latent Source Preferences Steer LLM Generations’ have been accepted at ICLR 2026; the camera-ready versions are coming soon! Both papers will also be presented at this year’s IASEAI annual conference! |
|---|---|
| Jan 20, 2026 | Our paper ‘The Algorithmic Self-Portrait: Deconstructing Memory in ChatGPT’ has been accepted at The Web Conference 2026; the camera-ready and arXiv versions are coming soon! This paper will also be presented at this year’s IASEAI annual conference! |
| Nov 04, 2025 | I’ll be attending EMNLP 2025 in Suzhou. Come and chat! |
| Sep 08, 2025 | I’m serving as a TA for a new seminar course on LLM training at Saarland University; check the course page: Efficient Training of Large Language Models: From Basics to Fine-Tuning. |
| Jul 29, 2025 | Our new paper Rote Learning Considered Useful: Generalizing over Memorized Data in LLMs is now on arXiv: ArXiv. |
latest posts
selected publications
- WWW 2026: The Algorithmic Self-Portrait: Deconstructing Memory in ChatGPT. ACM Web Conference 2026, Dubai, UAE, 2026. Also presented at the International Association for Safe & Ethical AI second annual conference (IASEAI'26), non-archival.
- ICLR 2026: Rote Learning Considered Useful: Generalizing over Memorized Data in LLMs. The Fourteenth International Conference on Learning Representations (ICLR 2026), April 23-27, 2026, Rio de Janeiro, Brazil. Also presented at the International Association for Safe & Ethical AI second annual conference (IASEAI'26), non-archival.