Hi, I am Zheyuan (Frank) Liu (刘哲源), a third-year CS Ph.D. candidate at the University of Notre Dame, advised by Prof. Meng Jiang and affiliated with the DM2 lab.
Before that, I received a B.S. in Computer Science and Applied Mathematics from Brandeis University.
My research interests lie in GenAI (e.g., MLLM/LLM) Safety, AI Privacy, Trustworthy AI, Agentic Safety, Multi-Agent Collaboration, etc. Currently, I am exploring Agentic Safety, Multi-Agent Collaboration, and VLA models. For more details, please refer to my CV.
Please feel free to drop me an email for any form of communication or collaboration! I am also an active blogger on RedNote (小红书); feel free to find my account here!
We propose PRISM, a unified framework that enforces dual-space smoothness in the representation and parameter spaces to improve robustness and balance unlearning metrics. PRISM consists of two smoothness optimization stages: (i) a representation-space stage that employs a robustly trained probe to defend against jailbreak attacks, and (ii) a parameter-space stage that decouples retain-forget gradient conflicts, reduces imbalance, and smooths the parameter space to mitigate relearning attacks.
We propose ACE-GSL, an adaptive and context-rich graph self-supervised learning framework that addresses these issues from the perspectives of adaptivity, integrity, complementarity, and consistency.
We propose OpenDecoder, a new approach that leverages explicit evaluation of the retrieved information as quality indicator features for generation. We aim to build a RAG model that is more robust to varying levels of noisy context. Three types of explicit evaluation information are considered: relevance score, ranking score, and QPP (query performance prediction) score.
We propose Iterative Model Merging (IMM), a method that strategically combines weights from original and self-improved models to preserve generalization while incorporating genuine reasoning improvements.
We propose Modality-Aware Neuron Unlearning (MANU), a novel unlearning framework for MLLMs designed to selectively clip neurons based on their relative importance to the targeted forget data, curated for different modalities. Specifically, MANU consists of two stages: important neuron selection and selective pruning. The first stage identifies and collects the most influential neurons across modalities relative to the targeted forget knowledge, while the second stage is dedicated to pruning those selected neurons.
We propose Selective Disentanglement Unlearning (SDU), a novel unlearning framework that selectively removes biased knowledge while preserving reasoning capabilities. SDU operates in three stages: identifying biased parameters using a shadow LLM, fine-tuning with unbiased data, and performing selective parameter updates based on weight saliency.
We introduce the Multimodal Large Language Model Unlearning Benchmark (MLLMU-Bench), a novel benchmark aimed at advancing the understanding of multimodal machine unlearning. MLLMU-Bench consists of 500 fictitious profiles and 153 public celebrity profiles, each featuring over 14 customized question-answer pairs, evaluated from both multimodal (image+text) and unimodal (text) perspectives.
We propose Stable Sequential Unlearning (SSU), a novel framework designed to unlearn copyrighted content from LLMs over multiple time steps. Our approach works by identifying and removing specific weight updates in the model's parameters that correspond to copyrighted content. We improve unlearning efficacy by introducing random labeling loss and ensuring the model retains its general-purpose knowledge by adjusting targeted parameters.
We propose Personalized Pieces (Per-Pcs) for personalizing large language models, where users can safely and efficiently share and assemble personalized PEFT modules through collaborative efforts. Per-Pcs outperforms non-personalized and PEFT retrieval baselines, offering performance comparable to OPPU with significantly lower resource use, promoting safe sharing and making LLM personalization more efficient, effective, and widely accessible.
We propose One PEFT Per User (OPPU) for personalizing large language models, where each user is equipped with a personal PEFT module that can be plugged into the base LLM to obtain their personal LLM. OPPU exhibits model ownership and enhanced generalization in capturing user behavior patterns compared to existing prompt-based LLM personalization methods.
We introduce Selective Knowledge Negation Unlearning (SKU), a novel unlearning framework for LLMs designed to eliminate harmful knowledge while preserving utility on normal prompts.
We develop Graph-Fairness Mixture of Experts (G-Fame), a novel plug-and-play method that helps any GNN learn distinguishable representations with unbiased attributes.
University of Notre Dame South Bend, IN, USA
2023.08 - present
Ph.D. in Computer Science and Engineering, GPA: 3.92 / 4.00
Advisor: Prof. Meng Jiang
Brandeis University Waltham, MA, USA
2019.08 - 2023.05
B.S. in Computer Science and B.S. in Applied Mathematics, GPA: 3.87 / 4.00
Service
Journal Reviewer: IEEE Transactions on Big Data (2023), TNNLS (2023), TKDE (2023, 2024)
Conference Reviewer: ICDM 2024 MLoG Workshop, ARR (2023-), CIKM Applied Research Track (2024), NeurIPS Datasets and Benchmarks Track (2024)
Invited Talks
Jan 22nd, 2025: PhooD Seminar @ Brandeis University
Miscellaneous
I've always been surrounded by wonderful friends, collaborators, and advisors, and I try to maintain an optimistic outlook. If you're having a tough time and would like someone to talk to, feel free to reach out!
I like sports, lifting and making new friends.
I like animals, especially chinchillas, hamsters, and kitties.