StealthAttack proposes a novel density-guided poisoning method for 3D Gaussian Splatting (3DGS) that strategically injects illusory objects into low-density regions to create viewpoint-dependent visual illusions. The method combines point-cloud poisoning with adaptive noise scheduling to disrupt multi-view consistency, so that illusions are clearly visible from target views while high fidelity is maintained from innocent viewpoints.
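The density-guided selection of injection sites can be sketched as follows. This is an illustrative approximation, not the paper's implementation: the k-NN density estimate, the function names, and the brute-force distance computation are all assumptions.

```python
import numpy as np

def knn_density(points, k=8):
    """Approximate local density as the inverse mean distance to k nearest neighbors."""
    # Pairwise distances; fine for small clouds (use a KD-tree at scale).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    # Column 0 is the distance to self; average over the k nearest neighbors.
    mean_knn = d[:, 1:k + 1].mean(axis=1)
    return 1.0 / (mean_knn + 1e-8)

def low_density_candidates(points, frac=0.05, k=8):
    """Return indices of the lowest-density points -- candidate injection sites."""
    density = knn_density(points, k)
    n = max(1, int(frac * len(points)))
    return np.argsort(density)[:n]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))
sites = low_density_candidates(cloud)  # 10 sparsest locations out of 200
```

The intuition is that sparse regions are weakly constrained by the training views, so Gaussians injected there can dominate the target view while barely affecting others.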
Developed a reference-guided 3D inpainting approach utilizing SDEdit on aligned Gaussian initialization, and created a 360° inpainting dataset (360-USID) for comprehensive evaluation.
This paper proposes FDDF (Features Distraction Defense Framework), a novel defense that mitigates trigger-based backdoor attacks in federated-learning-based intrusion detection systems. FDDF identifies and eliminates the most significant features, which are the most likely to carry triggers, without interfering with the model training process.
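The core idea, neutralizing the most significant features before they can carry a trigger, can be sketched as below. This is a minimal illustration under assumptions: the function name and the source of the importance scores (e.g., a gradient- or attribution-based method) are hypothetical, not FDDF's actual procedure.

```python
import numpy as np

def distract_features(x, importance, k=2):
    """Zero out the k most significant features, the likeliest trigger carriers,
    before the sample is used -- without touching model training."""
    top = np.argsort(importance)[-k:]  # indices of the k largest importance scores
    x_defended = x.copy()
    x_defended[..., top] = 0.0         # neutralize the candidate trigger features
    return x_defended

# Hypothetical flow record with 5 features and precomputed importance scores.
flow = np.array([0.3, 9.9, 0.1, 8.7, 0.2])
importance = np.array([0.05, 0.90, 0.02, 0.80, 0.03])
defended = distract_features(flow, importance, k=2)
```

Because the defense operates on inputs rather than gradients or model weights, it leaves the federated training loop untouched.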
We propose the Knowledge Distillation Defense Framework (KDDF) to detect and remove features of potential triggers at inference time. KDDF uses knowledge distillation (KD) to train a validation model on each IoT device, which is then used to flag suspicious inputs.
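One way the distilled validation model could flag suspicious inputs is by measuring disagreement with the main model: a backdoor trigger flips the main model's prediction but not the independently distilled student's. The sketch below is an assumption about the detection criterion (KL divergence over softmax outputs), not KDDF's published mechanism.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-8):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def flag_suspicious(main_logits, validation_logits, threshold=0.5):
    """Flag samples where the distilled validation model disagrees with the main model."""
    p = softmax(main_logits)
    q = softmax(validation_logits)
    return kl_divergence(p, q) > threshold

# Two samples: the models agree on the first, disagree sharply on the second.
main_logits = np.array([[4.0, 0.0], [5.0, 0.0]])
val_logits = np.array([[3.5, 0.0], [0.0, 5.0]])
flags = flag_suspicious(main_logits, val_logits)
```

Running the validation model only at inference keeps the per-device overhead small, which matters on resource-constrained IoT hardware.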
Evaluating the effectiveness of various methods (Restormer, fine-tuned diffusion model, and Vision Mamba) in mitigating simulated image artifacts to enhance NeRF and 3DGS performance in novel-view synthesis.
Honors
•Bronze Award of The 2022 ICPC Asia Taoyuan Regional Programming Contest
•Bronze Award of The 2023 ICPC Asia Taoyuan Regional Programming Contest
•Silver Award of The 2023 ICPC Asia Taiwan Online Programming Contest
•President's Award in 2023 Spring Semester (Top 1% in the class)
•College Student Research Scholarship, National Science and Technology Council, Taiwan
(in collaboration with Bo-Zhong Chen, 2023)
Stolen from Jon Barron's website.
Last updated Sep 2024.