I have broad interests in Natural Language Processing (NLP), the theoretical interpretation of large language models (LLMs), and trustworthy LLMs. I study the knowledge mechanisms of LLMs—how they acquire, store, represent, and utilize knowledge—and leverage these insights to enhance model reliability and performance. In particular, I focus on the following directions:
Interpreting, predicting, and preventing hallucination through the lens of knowledge interaction (e.g., knowledge overshadowing).
Updating knowledge in ways that preserve model robustness and reliability.
Improving knowledge acquisition mechanisms to enhance model intelligence.
Recent News
Invited Talks, Tutorials, Workshops, and Service
[Jan 2026] Invited Talk on "Knowledge is Power, But Power Casts Shadows?" at University of Edinburgh
[Jan 2026] Invited Talk on "Knowledge is Power, But Power Casts Shadows?" at University of Massachusetts
[Nov 2025] Invited Talk on "Developing Robust and Trustworthy Foundation Models" at NICE Academic Platform
[Oct 2025] Invited Talk on "Developing Robust and Trustworthy Language Models" at Northeastern University
[Aug 2025] Organized a Workshop on "Towards Knowledgeable Foundation Models" at ACL 2025
[Aug 2025] Session Chair for "Language Models and Interpretability" at ACL 2025
[Aug 2025] Session Chair for "Language Modeling" at ACL 2025
[Apr 2025] Invited Talk on "The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination" at Ploutos
[Apr 2025] Invited Talk on "The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination" at Chinese Academy of Sciences
[Mar 2025] Invited Talk on "The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination" at University of Texas at Austin
[Feb 2025] Organized a Tutorial on "The Lifecycle of Knowledge in Large Language Models: Memorization, Editing, and Beyond" at AAAI 2025
[Aug 2024] Invited Talk on "Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models" at Beijing Academy of Artificial Intelligence
Publications
(See the full list on Google Scholar. "†" denotes corresponding authors.)
Atomic Reasoning for Scientific Table Claim Verification