Yue Zhao
Assistant Professor
Thomas Lord Department of Computer Science
School of Advanced Computing

University of Southern California

Los Angeles, CA, USA
Email:

External Collaboration and Employment. I am open to external opportunities for invited talks, research collaborations, and employment (only on a part-time/advising/visiting basis). Let's connect by email. I frequently visit major cities, e.g., Seattle, NYC, Chicago, Boston, Atlanta, and the Bay Area, to meet people, give talks, and host social events.

Lab Openings. We warmly welcome new members to the FORTIS Lab!

Hiring Ph.D. Students with stringent criteria:
  • Future Ph.D. students should be comparable to our current first-year Ph.D. students -- see FORTIS Lab.
  • For Fall 2026, due to funding constraints, my lab will rely on fellowship offers rather than RA offers (which would require an infeasible amount of funding). Thus, this year we can only recommend candidates for fellowships; the fellowship committee makes the final decision. Applicants therefore need high GPAs (3.7+) and reasonably good English test scores (TOEFL 100+) for committee review.
  • Also check other labs with openings. Good luck!
Research Collaborators/Interns (Any Time, All Year Round):
  • We welcome both undergraduate and graduate interns from USC and other institutions.
  • We will provide GPUs/API keys for the project.
  • Preferred candidates are located in North America for time zone compatibility.
  • I do not hire in-person summer interns -- I am enjoying summer and working remotely :)
Application Process: To apply for either opportunity, complete the Application Form, email me after submitting the form, and review the FORTIS Lab website for more information before reaching out.


Research Interests: My research centers on building reliable, safe, and scalable AI. I organize my work into two tightly connected tiers: (1) advancing the scientific foundations of reliability and safety in modern AI systems, and (2) developing scalable infrastructure and applications that translate these foundations into real-world impact.

  1. Tier 1: Foundations of Reliable & Safe AI
    I investigate the principles that make AI models robust, predictable, and aligned under distribution shift, uncertainty, and adversarial pressures. This tier comprises two complementary directions:
    • Reliable Models (OOD & Anomaly Detection): Creating algorithms and benchmarks that detect rare, unseen, or abnormal patterns across modalities.
    • Safety of LLMs & Agents: Understanding and mitigating failure modes in large language models and multi-agent systems, including hallucinations, jailbreaks, privacy leakage, and model extraction.
    Keywords: LLM Safety, Hallucination Mitigation, Jailbreak Detection, OOD Generalization, Anomaly Detection, Multi-agent Reliability, Privacy & Model Extraction Risks, Robust Reasoning
  2. Tier 2: Scalable Open Systems & Scientific/Societal Impact
    I design open, scalable platforms and apply reliable AI methods to high-impact scientific and societal problems. This tier focuses on two areas that operationalize foundational advances:
    • ML Systems (Infrastructure): Building efficient, reproducible, and open-source systems for large-scale model training, evaluation, deployment, and workflow automation.
    • AI for Science & Society (Applications): Using foundation models to advance climate and weather forecasting, healthcare and biomedicine, and political or social decision-making.
    Keywords: Open-source ML Systems, Scalable AI Infrastructure, Automated ML Workflows, Evaluation & Benchmarking, AI for Science, Scientific Foundation Models, Climate & Weather Modeling, AI for Healthcare & Biomedicine

Biography.


✈ News and Travel

[Dec 2025] Our entire group is at NeurIPS 2025 in San Diego! Please reach out to our Ph.D. students for collaboration opportunities and internships!

[Nov 2025] 🎉Our work on explainability–extractability tradeoffs in MLaaS wins the Second Prize CCC Award at the IEEE ICDM 2025 BlueSky Track!

[Nov 2025] Our paper on mitigating hallucinations in LLMs using causal reasoning has been accepted to AAAI 2026! See our Preprint.

[Nov 2025] 🎉Our LLM-augmented transformer for typhoon forecasting, TyphoFormer, wins the Best Short Paper Award at ACM SIGSPATIAL 2025; see our Preprint!

[Oct 2025] Two new papers accepted to IJCNLP-AACL 2025 Findings: AD-AGENT: A Multi-agent Framework for End-to-end Anomaly Detection, and LLM-Empowered Patient-Provider Communication (a data-centric survey on clinical applications of LLMs). Congratulations to all!

[Oct 2025] 🎉Congratulations to our Ph.D. students Yuehan Qin and Haoyan Xu for successfully passing their qualifying exams! Both achieved this only 1.5 years after transferring to our group. We are so proud of their accomplishments and excited for their continued research journeys toward graduation!

[Sep 2025] 🎉Congratulations to Shawn Li for being selected as an Amazon ML Fellow (2025–2026). The fellowship recognizes his strong research achievements as a Ph.D. student and will further accelerate his work in secure and trustworthy machine learning.

[Sep 2025] New collaborative NeurIPS 2025 paper “DyFlow” proposes a dynamic workflow framework for agentic reasoning with LLMs.

[Aug 2025] We have two new papers accepted to EMNLP Findings 2025: one on causal methods for hallucination mitigation (Treble Counterfactual VLMs) and another introducing a benchmark for NLP anomaly detection (NLP-ADBench). See our Treble Preprint and NLP-ADBench Preprint!

🏅 Awards and Grants

As Principal Investigator (August 2023 onwards)
Prior to Principal Investigator Role (Before August 2023)