Runqi Lin

Postdoctoral Researcher

Oxford Internet Institute
University of Oxford

Address: Room 30.203, Stephen A. Schwarzman Centre for the Humanities, University of Oxford, OX2 6GG, UK.

E-mail: runqi.lin [at] oii.ox.ac.uk; runqi.lin0403 [at] gmail.com

[Google Scholar] [DBLP] [GitHub]


Biography

I am currently a Postdoctoral Researcher at the Oxford Internet Institute, University of Oxford, hosted by Prof. Chris Russell. Prior to this, I completed my doctoral studies at the University of Sydney in January 2026, where I was fortunate to be advised by Prof. Tongliang Liu. I have also been a visiting student at the University of Oxford, MBZUAI (Mohamed bin Zayed University of Artificial Intelligence), and Tsinghua University. My research interests lie in human-centred AI, with a particular emphasis on trustworthy and responsible machine learning.


Publications

      * indicates equal contribution;   † indicates corresponding author.

  • FORCE: Transferable Visual Jailbreaking Attacks via Feature Over-Reliance CorrEction.
    Runqi Lin, Alasdair Paren, Suqin Yuan, Muyang Li, Philip Torr, Adel Bibi, and Tongliang Liu.
    CVPR 2026. [PDF] [CODE]

  • Mobile-VTON: High-Fidelity On-Device Virtual Try-On.
    Zhenchen Wan*, Ce Chen*, Runqi Lin, Jiaxin Huang, Tianxi Chen, Yanwu Xu, Tongliang Liu, and Mingming Gong.
    CVPR 2026. [PDF] [CODE]

  • Understanding and Enhancing the Transferability of Jailbreaking Attacks.
    Runqi Lin, Bo Han, Fengwang Li, and Tongliang Liu.
    ICLR 2025. [PDF] [CODE]

  • Instance-dependent Early Stopping.
    Suqin Yuan, Runqi Lin, Lei Feng, Bo Han, and Tongliang Liu.
    ICLR 2025 (Spotlight, 5.1%). [PDF] [CODE]

  • Layer-Aware Analysis of Catastrophic Overfitting: Revealing the Pseudo-Robust Shortcut Dependency.
    Runqi Lin, Chaojian Yu, Bo Han, Hang Su, and Tongliang Liu.
    ICML 2024. [PDF] [CODE]

  • On the Over-Memorization During Natural, Robust and Catastrophic Overfitting.
    Runqi Lin, Chaojian Yu, Bo Han, and Tongliang Liu.
    ICLR 2024. [PDF] [CODE]

  • Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization.
    Runqi Lin, Chaojian Yu, and Tongliang Liu.
    NeurIPS 2023. [PDF] [CODE]


Research Interests

  • Fairness
    - Fairness in Generative Models.

  • Adversarial Robustness
    - Adversarial Attack & Training.
    - Jailbreaking Attack in Large Language Models.
    - Robustness in Vision-Language Models.

  • Generalization Capability
    - Catastrophic & Robust Overfitting.
    - Sharpness & Transferability.


Honors & Awards

  • USYD Faculty of Engineering Career Advancement Award.

  • UAI 2025 Top Reviewer.

  • ACM MM 2024 Outstanding Reviewer.

  • OpenAI Researcher Access Program.

  • ICML 2024 Financial Aid.

  • NeurIPS 2023 Scholar Award.

  • USYD Faculty of Engineering Research Scholarship.


Academic Services