This repository contains the official implementation of UCAN: Towards Strong Certified Defense with Asymmetric Randomization, providing code for reproducible certified adversarial robustness experiments.
Abstract: This work presents UCAN, a unified framework for customizing anisotropic noise in randomized smoothing to achieve stronger certified adversarial robustness. We propose three novel Noise Parameter Generators (NPGs) with different optimality levels and provide theoretical guarantees for anisotropic randomized smoothing.
Key Contributions:
- Universal theory for anisotropic randomized smoothing based on linear transformations (a short sketch of this idea follows the list)
- Three NPG methods with different optimality-efficiency trade-offs
- Certification-wise approach ensuring soundness without memory overhead
- Significant improvements in certified accuracy across multiple datasets
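The first contribution above rests on a simple observation: anisotropic Gaussian noise can be generated by applying a linear transformation to an isotropic sample. Below is a minimal PyTorch sketch of that idea, using an illustrative per-pixel scale map; the names are ours, not the repository's API.

import torch

def anisotropic_gaussian_noise(x, sigma_map):
    """Anisotropic Gaussian noise as a linear transform of isotropic noise.

    `sigma_map` is a per-dimension scale (the diagonal of the linear
    transformation), broadcastable to the shape of `x`.
    """
    eps = torch.randn_like(x)   # isotropic N(0, I) sample
    return sigma_map * eps      # diagonal linear transformation of eps

# Example: a 32x32 input with weaker noise at the center, stronger at the border
ys, xs = torch.meshgrid(torch.arange(32.0), torch.arange(32.0), indexing="ij")
sigma_map = 0.25 + 0.75 * ((xs - 15.5) ** 2 + (ys - 15.5) ** 2) / (2 * 15.5 ** 2)
noise = anisotropic_gaussian_noise(torch.zeros(32, 32), sigma_map)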
# Clone the repository
git clone [ANONYMOUS_REPO_URL]
cd UCAN
# Create conda environment
conda env create -f environment.yml
conda activate ucan

# Clone the repository
git clone [ANONYMOUS_REPO_URL]
cd UCAN
# Install dependencies
pip install -r requirements.txt

# Train a certification-wise model on CIFAR-10
python train_certification_noise.py cifar10 cifar_resnet110 ./model_saved/ \
--method="PersNoise_isoR" --lr=0.01 --batch=100 --sigma=1.0 \
--epochs=200 --gpu="0" --noise_name="Gaussian"
# Certify the test set
python certification_certification_noise.py cifar10 cifar_resnet110 \
--method="PersNoise_isoR" --batch=1000 --sigma=1.0 --gpu="0" \
--norm=2 --noise_name="Gaussian"

UCAN/
├── README.md                    # This file
├── requirements.txt             # Python dependencies
├── environment.yml              # Conda environment
├── examples/                    # Example scripts and notebooks
│   ├── quick_start.py           # Minimal working example
│   └── demo.ipynb               # Interactive demo
├── archs/                       # Neural network architectures
│   └── cifar_resnet.py          # ResNet for CIFAR
├── utils/                       # Utility functions
│   ├── model_prepare.py         # Model preparation utilities
│   ├── plot_examples.py         # Visualization utilities
│   └── plot_runtime.py          # Runtime analysis
├── model_saved/                 # Pre-trained models directory
├── results/                     # Experimental results
├── Core Implementation Files:
├── architectures.py             # NPG architectures
├── noisegenerator.py            # Noise parameter generators
├── noises.py                    # Noise distribution definitions
├── datasets.py                  # Dataset loading and preprocessing
├── core.py                      # Core UCAN certification
├── core_baseline.py             # Baseline certification (Cohen et al.)
├── Training & Certification Scripts:
├── train_*.py                   # Training scripts for each NPG method
└── certification_*.py           # Certification scripts for each method
- Pattern-wise Anisotropic Noise (Low optimality; a code sketch of all three NPGs follows this list)
  - Fixed hand-crafted spatial patterns
  - No training required, inference-free
  - Basic but computationally efficient
- Dataset-wise Anisotropic Noise (Moderate optimality)
  - Learned parameters optimized for the entire dataset
  - Pre-training required, one-time inference
  - Balanced performance-efficiency trade-off
- Certification-wise Anisotropic Noise (High optimality)
  - Input-specific parameter optimization
  - Per-input inference required
  - Maximum adaptation capability
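The trade-offs are easiest to see in code. The following is a hedged sketch with hypothetical class names (not the interface of noisegenerator.py) of how the three NPG flavors could produce their per-pixel noise scales:

import math
import torch
import torch.nn as nn

class FixedPatternNPG(nn.Module):
    """Pattern-wise: a hand-crafted, non-trainable per-pixel noise scale."""
    def __init__(self, sigma_map):
        super().__init__()
        self.register_buffer("sigma_map", sigma_map)   # fixed, never trained

    def forward(self, x):
        return x + torch.randn_like(x) * self.sigma_map

class DatasetWiseNPG(nn.Module):
    """Dataset-wise: one per-pixel scale map learned over the whole dataset."""
    def __init__(self, shape, sigma_init=0.5):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.full(shape, math.log(sigma_init)))

    def forward(self, x):
        return x + torch.randn_like(x) * self.log_sigma.exp()   # exp keeps scales positive

class CertificationWiseNPG(nn.Module):
    """Certification-wise: an auxiliary network predicts a scale map per input."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Softplus(),   # positive scales
        )

    def forward(self, x):
        return x + torch.randn_like(x) * self.net(x)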
- Datasets: MNIST, CIFAR-10, ImageNet
- Architectures: ResNet (various depths) and other CNN architectures
- Threat Models: ℓ1, ℓ2, and ℓ∞ perturbations
python train_certification_noise.py cifar10 cifar_resnet110 ./model_saved/ \
--method="PersNoise_isoR" \
--lr=0.01 \
--batch=100 \
--sigma=1.0 \
--epochs=200 \
--workers=16 \
--lr_step_size=50 \
--gpu="0" \
--noise_name="Gaussian" \
--IsoMeasure=True

python train_dataset_noise.py cifar10 cifar_resnet110 ./model_saved/ \
--method="UniversalNoise" \
--lr=0.01 \
--batch=100 \
--sigma=1.0 \
--epochs=200 \
--gpu="0"

python train_pattern_noise.py cifar10 cifar_resnet110 ./model_saved/ \
--method="PreassignedNoise" \
--pattern_type="center_focus" \
--lr=0.01 \
--batch=100 \
--epochs=200 \
--gpu="0"

python certification_certification_noise.py cifar10 cifar_resnet110 \
--method="PersNoise_isoR" \
--batch=1000 \
--sigma=1.0 \
--workers=16 \
--gpu="0" \
--norm=2 \
--noise_name="Gaussian" \
--IsoMeasure=True

python certification_baseline.py cifar10 cifar_resnet110 \
--sigma=1.0 \
--batch=1000 \
--gpu="0" \
--norm=2

Our method achieves significant improvements in certified accuracy:
- MNIST: Up to 142.5% improvement over the best baseline
- CIFAR-10: Up to 182.6% improvement over the best baseline
- ImageNet: Up to 121.1% improvement over the best baseline
To reproduce paper results:
# Download pre-trained models (if available)
# Run full experimental pipeline
bash scripts/reproduce_paper_results.sh

- Linear Transformation Theory: Direct mapping between isotropic and anisotropic noise
- Soundness Guarantees: Certification-wise approach avoids memory-based certification
- Universal Framework: Works with any existing randomized smoothing method (see the sketch after this list)
- No Memory Overhead: Unlike ANCER/RANCER, no parameter caching required
- Flexible Trade-offs: Choose NPG method based on efficiency requirements
- Strong Performance: Consistent improvements across datasets and threat models
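To make the "universal framework" point concrete, here is a rough sketch (assumed names, not the interface of core.py) of a standard Monte Carlo smoothed prediction in which the usual isotropic sample is simply replaced by an anisotropic one; the resulting class counts can then be passed to the same statistical certification step used by Cohen et al.:

import torch

@torch.no_grad()
def smoothed_predict(model, x, sigma_map, num_classes, n_samples=1000, batch=100):
    """Majority vote of `model` under anisotropic noise x + sigma_map * eps.

    Using a scalar sigma_map recovers the isotropic sampling of Cohen et al.
    """
    counts = torch.zeros(num_classes, dtype=torch.long)
    remaining = n_samples
    while remaining > 0:
        b = min(batch, remaining)
        xb = x.unsqueeze(0).expand(b, *x.shape)     # repeat the input b times
        noise = torch.randn_like(xb) * sigma_map    # anisotropic Gaussian sample
        preds = model(xb + noise).argmax(dim=1)
        counts += torch.bincount(preds, minlength=num_classes)
        remaining -= b
    return counts.argmax().item()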
You can also define a custom pattern-wise noise:

from noises import GaussianNoise
from noisegenerator import NoiseGenerator

# Create custom pattern-wise noise
custom_pattern = lambda x, y: 0.1 + 0.9 * (x**2 + y**2) / (32**2)
noise_gen = NoiseGenerator(pattern=custom_pattern)
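As a quick sanity check (assuming the lambda takes pixel coordinates in [0, 32), which is our reading rather than something documented here), you can rasterize the pattern and inspect its range:

# Rasterize the pattern into a 32x32 per-pixel scale map and inspect its extremes
sigma_vals = [[custom_pattern(xc, yc) for xc in range(32)] for yc in range(32)]
print(sigma_vals[0][0], sigma_vals[31][31])   # 0.1 at the origin, roughly 1.79 at the far corner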
# Use multiple GPUs
python train_certification_noise.py cifar10 cifar_resnet110 ./model_saved/ \
--gpu="0,1,2,3" \
--batch=400  # Scale batch size accordingly

We welcome contributions! Please see our contributing guidelines:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
If you use this code in your research, please cite our paper:
@article{anonymous2024ucan,
title={UCAN: Towards Strong Certified Defense with Asymmetric Randomization},
author={Anonymous Authors},
journal={Under Review},
year={2024}
}

- Cohen et al. - Certified Adversarial Robustness via Randomized Smoothing
- ANCER - Anisotropic Certified Robustness
- RANCER - Randomized Anisotropic Noise
For questions about the code or paper, please:
- Open an issue on GitHub
- Contact: Anonymous submission - contact information will be provided upon acceptance
- Built on top of the certified robustness framework by Cohen et al.
- Neural network architectures adapted from pytorch-classification
- Thanks to the randomized smoothing community for foundational work
Note: This implementation is provided for research purposes. For production use, additional testing and validation may be required.