TL;DR: We propose a way to remove "concepts" from vision-language foundation models via a single-layer, single-step update, which is highly efficient and scales to large models. Our method supports various applications, such as removing harmful biases from models, correcting model errors, and protecting data privacy.
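To make "single-layer, single-step" concrete, here is a minimal, hedged sketch of a closed-form rank-one edit of one linear layer, in the style of model-editing methods; the layer shape, the concept key `k`, and the zero target `v_star` are illustrative assumptions, not the paper's exact formulation.

```python
import torch

# Hypothetical single-step edit: find the minimal change to W such that the
# edited layer maps a concept "key" activation k to a desired output v_star,
#     min_W' ||W' - W||  subject to  W' k = v_star,
# whose closed form is  W' = W + (v_star - W k) k^T / (k^T k).

def edit_layer(W: torch.Tensor, k: torch.Tensor, v_star: torch.Tensor) -> torch.Tensor:
    """Return an edited weight matrix that maps k to v_star, changing W minimally."""
    residual = v_star - W @ k                 # what the current layer gets wrong on k
    return W + torch.outer(residual, k) / (k @ k)

torch.manual_seed(0)
W = torch.randn(512, 768)                     # stand-in for one projection layer
k = torch.randn(768)                          # embedding of the concept to remove
v_star = torch.zeros(512)                     # suppress the concept's contribution

W_edited = edit_layer(W, k, v_star)
print(torch.allclose(W_edited @ k, v_star, atol=1e-3))  # True: k now maps to the target
```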
TL;DR: We propose a query-efficient approach for black-box attacks against computer vision models. Spotlight: our method can generate a single perturbation that fools multiple black-box detection and segmentation models simultaneously, demonstrating generalizability across tasks.
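As an illustration of the query-efficient black-box setting (not this paper's algorithm), the sketch below uses NES-style finite-difference gradient estimation against an ensemble of score-only oracles, so one perturbation degrades all of them at once; the toy linear models `model_a`/`model_b` and all hyperparameters are assumptions.

```python
import torch

torch.manual_seed(0)
W1 = torch.randn(10, 3 * 32 * 32)
W2 = torch.randn(10, 3 * 32 * 32)

def model_a(x):  # black-box oracle 1: returns class scores only, no gradients
    return (W1 @ x.flatten()).softmax(-1)

def model_b(x):  # black-box oracle 2
    return (W2 @ x.flatten()).softmax(-1)

def attack_loss(x, label):
    # True-class confidence summed over every model; driving it down fools all.
    return sum(m(x)[label] for m in (model_a, model_b))

def nes_gradient(x, label, sigma=0.01, n=25):
    """Estimate grad of attack_loss with 2*n queries per model (antithetic samples)."""
    g = torch.zeros_like(x)
    for _ in range(n):
        u = torch.randn_like(x)
        g += (attack_loss(x + sigma * u, label) - attack_loss(x - sigma * u, label)) * u
    return g / (2 * n * sigma)

x = torch.rand(3, 32, 32)
label, eps, x_adv = 3, 8 / 255, x.clone()
for _ in range(10):                           # each step costs 2*n queries per model
    x_adv = x_adv - 0.002 * nes_gradient(x_adv, label).sign()
    x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
print(attack_loss(x, label).item(), "->", attack_loss(x_adv, label).item())
```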
Preprint
Inference-Time Text-to-Image Models Safety Steering with Plug-and-Play Classifier Guidance Yaoteng Tan, Zikui Cai, M. Salman Asif
To appear (New)
TL;DR: We propose a highly effective method for ensuring safety in text-to-image generative models by integrating off-the-shelf vision-language foundation models, which are pre-trained to encode rich semantic information and can serve as plug-in inspectors for responsible text-to-image generation.
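A very simplified sketch of the general classifier-guidance recipe, assumed here rather than taken from the paper: at each denoising step, push the latent down the gradient of a frozen safety scorer's "unsafe" output. `denoiser` and `safety_head` are toy stand-ins, not real checkpoints.

```python
import torch

torch.manual_seed(0)
denoiser = torch.nn.Linear(64, 64)        # stands in for one U-Net denoising step
safety_head = torch.nn.Linear(64, 1)      # frozen plug-in scorer: higher = more unsafe

def guided_step(z, step_size=0.1, guidance_scale=2.0):
    z = z.detach().requires_grad_(True)
    unsafe_score = safety_head(z).sum()               # scalar so autograd can run
    grad = torch.autograd.grad(unsafe_score, z)[0]    # direction of "more unsafe"
    eps_pred = denoiser(z)                            # ordinary denoising update
    return (z - step_size * eps_pred - guidance_scale * grad).detach()

z = torch.randn(1, 64)                    # latent being sampled
for _ in range(50):                       # simplified sampling loop
    z = guided_step(z)
```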
TL;DR: We explore the transform-dependent properties of adversarial examples and propose a method to generate perturbations that remain effective under various image transformations. Through camera experiments, we demonstrate that this transform-dependent behavior persists even in the physical world, and can inform the design of more robust adversarial attacks and defenses.
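For intuition, here is a hedged sketch of an expectation-over-transforms (EOT-style) objective, one standard way to make a perturbation survive a transform family; the random-rescaling transform, the toy `classifier`, and the budget are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))

x = torch.rand(1, 3, 32, 32)
label = torch.tensor([5])
delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for _ in range(100):
    # Sample one transform per step: rescale down/up so gradients average
    # over the transform family instead of a single fixed view.
    scale = float(torch.empty(1).uniform_(0.8, 1.2))
    size = max(8, int(32 * scale))
    x_t = F.interpolate(x + delta, size=(size, size), mode="bilinear", align_corners=False)
    x_t = F.interpolate(x_t, size=(32, 32), mode="bilinear", align_corners=False)
    loss = -F.cross_entropy(classifier(x_t), label)   # ascend the true-class loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-8 / 255, 8 / 255)               # keep the perturbation small
```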
Academic Service
Conference reviewer:
2026: ICLR, CVPR, ECCV, ICIP, WACV, NeurIPS
2025: ICCV, ICIP, IEEE Asilomar
2024: WACV
Teaching Assistant:
UCR EE240 Pattern Recognition, Spring 2023, Spring 2024
UCR CS171/EE142 Intro. to Machine Learning, Fall 2023, Winter 2026