Qinyuan Wu


qwu [at] mpi-sws [dot] org

Campus E1 5

66125, Saarbruecken, Germany

I am a third-year PhD student at CS@Max Planck and the Max Planck Institute for Software Systems (MPI-SWS), advised by Krishna Gummadi. I am also fortunate to collaborate closely with, and receive guidance from, Evimaria Terzi (Boston University), Mariya Toneva (MPI-SWS), and Muhammad Bilal Zafar (Ruhr University Bochum) (ordered alphabetically by last name). Before joining MPI-SWS, I received my bachelor's degree in mathematics and physics from the University of Electronic Science and Technology of China (UESTC).

I investigate how large language models (LLMs) internalize, represent, and utilize knowledge—seeking to enhance their reliability, interpretability, and safety. My work centers on understanding the interplay between internal learning (from training) and external adaptation (via prompts, retrieval, or tool use).

Ultimately, I aim to understand and improve the loop between how LLMs learn, remember, refer, and act—toward more trustworthy and cognitively grounded AI systems.

Beyond core research, I collaborate on:

  1. Privacy and security in LLMs – balancing data protection with model utility and efficiency.

  2. Neuroscience-inspired modeling – linking human memory mechanisms to LLM cognition.

  3. LLM systems and optimization – exploring how PEFT, quantization, and inference techniques affect learning and behavior.


news

Jan 26, 2026 Our papers ‘Rote Learning Considered Useful: Generalizing over Memorized Data in LLMs’ and ‘In Agents We Trust, but Who Do Agents Trust? Latent Source Preferences Steer LLM Generations’ have been accepted at ICLR 2026; the camera-ready versions are coming soon! Both papers will also be presented at this year’s IASEAI annual conference!
Jan 20, 2026 Our paper ‘The Algorithmic Self-Portrait: Deconstructing Memory in ChatGPT’ has been accepted at The Web Conference 2026; the camera-ready and arXiv versions are coming soon! This paper will also be presented at this year’s IASEAI annual conference!
Nov 04, 2025 I’ll be attending EMNLP 2025 in Suzhou; come and chat!
Sep 08, 2025 I’m serving as a TA for a new seminar course on LLM training at Saarland University; check the course page: Efficient Training of Large Language Models: From Basics to Fine-Tuning.
Jul 29, 2025 Our new paper Rote Learning Considered Useful: Generalizing over Memorized Data in LLMs is now on arXiv: ArXiv.


selected publications

  1. WWW 2026
    The Algorithmic Self-Portrait: Deconstructing Memory in ChatGPT
    Abhisek Dash*, Soumi Das*, Elisabeth Kirsten*, Qinyuan Wu*, Sai Keerthana Karnam, Krishna P. Gummadi, Thorsten Holz, Muhammad Bilal Zafar, and Savvas Zannettou
    ACM Web Conference 2026, Dubai, UAE, 2026
    Presented at The International Association for Safe & Ethical AI second annual conference (IASEAIʼ26), non-archival
  2. ICLR 2026
    Rote Learning Considered Useful: Generalizing over Memorized Data in LLMs
    Qinyuan Wu, Soumi Das, Mahsa Amani, Bishwamittra Ghosh, Mohammad Aflah Khan, Krishna P. Gummadi, and Muhammad Bilal Zafar
    The Fourteenth International Conference on Learning Representations, ICLR 2026, April 23-27, 2026, Rio de Janeiro, Brazil, 2026
    Presented at The International Association for Safe & Ethical AI second annual conference (IASEAIʼ26), non-archival
  3. WSDM 2025
    Towards Reliable Latent Knowledge Estimation in LLMs: Zero-Prompt Many-Shot Based Factual Knowledge Extraction
    Qinyuan Wu, Mohammad Aflah Khan, Soumi Das, Vedant Nanda, Bishwamittra Ghosh, Camila Kolling, Till Speicher, Laurent Bindschaedler, Krishna P. Gummadi, and Evimaria Terzi
    The 18th ACM International Conference on Web Search and Data Mining (WSDM 2025), Hannover, Germany, 2025