We propose a highly efficient machine unlearning method for foundation models (e.g., CLIP, Stable Diffusion, VLMs) that requires only a one-time gradient computation and a one-step update on a single model layer, selected using two introduced metrics: layer importance and gradient alignment. Our method provides a modular, post-training modification for large models.
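A minimal PyTorch-style sketch of the layer-selection idea is below. The specific forms of the two metrics (gradient norm as importance, cosine similarity between forget- and retain-set gradients as alignment), their combination into a score, and the ascent-style update are assumptions made for illustration, not the exact formulation.

```python
import torch
import torch.nn.functional as F

def select_layer_and_unlearn(model, forget_loss, retain_loss, lr=1e-3):
    """One-time gradient computation, then a one-step update on a single
    selected parameter tensor (treated as a 'layer' in this sketch)."""
    names, params = zip(*[(n, p) for n, p in model.named_parameters() if p.requires_grad])
    grads_f = torch.autograd.grad(forget_loss, params, retain_graph=True, allow_unused=True)
    grads_r = torch.autograd.grad(retain_loss, params, allow_unused=True)

    best_name, best_score, best_grad = None, float("-inf"), None
    for name, gf, gr in zip(names, grads_f, grads_r):
        if gf is None or gr is None:
            continue
        importance = gf.norm().item()  # assumed proxy for layer importance
        # Assumed alignment metric: cosine similarity between forget- and
        # retain-set gradients; low alignment suggests updating this layer
        # does little damage to retained knowledge.
        alignment = F.cosine_similarity(gf.flatten(), gr.flatten(), dim=0).item()
        score = importance * (1.0 - alignment)  # hypothetical combination rule
        if score > best_score:
            best_name, best_score, best_grad = name, score, gf

    assert best_name is not None
    # One-step update on the selected layer only; gradient *ascent* on the
    # forget objective is an assumed unlearning rule for this sketch.
    with torch.no_grad():
        dict(model.named_parameters())[best_name].add_(lr * best_grad)
    return best_name
```

Here `forget_loss` and `retain_loss` would come from forward passes on the data to be forgotten and on a retained-data subset, respectively.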
We propose a query-efficient approach for adversarial attacks on dense prediction models. Our method generates a single perturbation that fools multiple black-box detection and segmentation models simultaneously, demonstrating generalizability across tasks.
Many properties of adversarial attacks are well studied today (e.g., optimization, transferability, physical implementation). In this work, we explore an under-researched transform-dependent property of adversarial attacks, in which the optimization of additive adversarial perturbations is combined with various image transformations to produce versatile, transform-dependent attack effects.
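One concrete reading of this setup is sketched below in PyTorch: a perturbation that leaves the untransformed image benign but drives the transformed image toward a target class. The two-term objective, the fixed rotation as the transform, and the L-infinity budget `eps` are illustrative assumptions, not the exact formulation.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def transform_dependent_attack(model, x, y_true, y_target, angle=90.0,
                               steps=200, lr=1e-2, eps=8 / 255):
    """Optimize a perturbation whose adversarial effect is triggered only
    after a chosen transform (here, rotation by `angle`) is applied."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        benign_logits = model(x_adv)                    # untransformed: stay correct
        attack_logits = model(TF.rotate(x_adv, angle))  # transformed: hit the target
        loss = F.cross_entropy(benign_logits, y_true) + \
               F.cross_entropy(attack_logits, y_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation within the budget
    return delta.detach()
```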
Service
Conference reviewer:
WACV, ICCV, ICIP
Teaching Assistant:
UCR EE240 Pattern Recognition, Spring 2023, Spring 2024
UCR CS171/EE142 Intro. to Machine Learning, 2023 Fall