Shaden Alshammari • John R. Hershey • Axel Feldmann • William T. Freeman • Mark Hamilton
TL;DR: We introduce a single equation that unifies >20 machine learning methods into a periodic table. Our framework enables rapid prototyping of new ML algorithms with just a few lines of code.
Implement SimCLR in 5 lines:

```python
from model import Model
from config import Config
from mappers import SimpleCNN
from distributions import Augmentation, Gaussian

config = Config(
    mapper=SimpleCNN(output_dim=128, input_key="image", output_key="embedding"),
    supervisory_distribution=Augmentation(input_key="index"),
    learned_distribution=Gaussian(sigma=0.5, metric="cosine", input_key="embedding"),
    lr=1e-3,
)
model = Model(config)
```

Every representation learning method answers three questions:
| Component | Purpose | Examples |
|---|---|---|
| Mapper | How to encode inputs? | CNN, ResNet, LookupTable |
| Supervisory Distribution | What relationships exist in input space? | Labels, Augmentations, k-NN, Gaussian |
| Learned Distribution | How to model embedding relationships? | Gaussian, Student-t, UniformCluster |
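These three components plug into one objective: the paper's unifying equation is the average KL divergence between the supervisory neighborhood distribution p and the learned distribution q. A minimal numpy sketch of that loss (the function name and dense `(N, N)` layout are illustrative, not the library's API):

```python
import numpy as np

def icon_loss(p: np.ndarray, q: np.ndarray, eps: float = 1e-9) -> float:
    """Mean over anchors i of KL(p(.|i) || q(.|i)).

    p: (N, N) row-stochastic supervisory distribution (input-space relationships).
    q: (N, N) row-stochastic learned distribution (embedding-space relationships).
    """
    kl_per_anchor = (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=1)
    return float(kl_per_anchor.mean())
```

The loss is zero exactly when the learned distribution matches the supervisory one row by row, which is what each method expressed in this framework is optimizing.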
```python
config = Config(
    mapper=...,                    # Neural architecture
    supervisory_distribution=...,  # Input space relationships
    learned_distribution=...,      # Embedding space relationships
)
```

### Parametric t-SNE
```python
Config(
    mapper=SimpleCNN(output_dim=2),
    supervisory_distribution=Gaussian(sigma=5, input_key="image"),
    learned_distribution=StudentT(gamma=1, input_key="embedding"),
)
```

### Supervised Contrastive Learning
```python
Config(
    mapper=SimpleCNN(output_dim=128, unit_sphere=True),
    supervisory_distribution=Label(input_key="label"),
    learned_distribution=Gaussian(sigma=0.4),
)
```

### Cross-Entropy with Learnable Classes
```python
Config(
    mapper=[SimpleCNN(output_dim=128), LookUpTable(num_embeddings=10)],
    supervisory_distribution=Label(num_classes=10),
    learned_distribution=Gaussian(sigma=0.5, metric="dot"),
)
```

**Built-in Components:** Distance-based kernels (Gaussian, StudentT), graph methods (UniformKNN, Label), clustering approaches, and neural architectures (SimpleCNN, ResNet, MLPMapper, LookUpTable). All components are easily extensible.
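To make the distance-based kernels concrete, here is a standalone numpy sketch of what Gaussian and Student-t neighborhood distributions compute (illustrative functions, not the library's component classes):

```python
import numpy as np

def pairwise_sq_dists(x: np.ndarray) -> np.ndarray:
    """(N, N) squared Euclidean distances between rows of x."""
    return ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)

def gaussian_neighborhood(x: np.ndarray, sigma: float = 0.5) -> np.ndarray:
    """Row-stochastic Gaussian kernel; self-affinity is masked out."""
    logits = -pairwise_sq_dists(x) / (2.0 * sigma ** 2)
    np.fill_diagonal(logits, -np.inf)           # a point is not its own neighbor
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

def student_t_neighborhood(x: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """Row-normalized heavy-tailed kernel, as in t-SNE's low-dimensional map."""
    w = (1.0 + pairwise_sq_dists(x) / gamma) ** (-(gamma + 1.0) / 2.0)
    np.fill_diagonal(w, 0.0)
    return w / w.sum(axis=1, keepdims=True)
```

Both return row-stochastic matrices, so either can serve as a supervisory or a learned distribution; the only difference is the tail behavior of the kernel.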
**Visualization Suite:** Automatic embedding plots, neighborhood distributions, cluster analysis, and probability visualizations with TensorBoard integration.
**Training Example:**

```python
import pytorch_lightning as pl

from visualization import PlotLogger, EmbeddingsPlot, NeighborhoodDistPlot

plot_logger = PlotLogger([EmbeddingsPlot(), NeighborhoodDistPlot()])
trainer = pl.Trainer(callbacks=[plot_logger])
trainer.fit(model, train_loader, test_loader)
```

View results: `tensorboard --logdir=notebook_logs`
```bash
git clone https://github.com/ShadeAlsha/ICon.git
cd ICon
pip install -r requirements.txt
```

Requirements: PyTorch, PyTorch Lightning, matplotlib, plotly
| Before | After |
|---|---|
| Separate implementations for each method | Universal config pattern |
| `implement_tsne()`, `implement_simclr()` | `Config(mapper=..., supervisory=..., learned=...)` |
Switch methods by changing components:

```python
# SimCLR → t-SimCLR: swap the learned distribution's kernel
learned_distribution=Gaussian(...)  # SimCLR
learned_distribution=StudentT(...)  # t-SimCLR
```

I-Con Playground provides robust GPU support across different hardware configurations:
- CPU: Works on any machine (default fallback)
- CUDA GPUs: NVIDIA GPUs on local machines, Google Colab, or lab clusters
- Apple Silicon (MPS): M1/M2/M3 Macs with hardware acceleration
The playground automatically selects the best available device, but you can override:
```bash
# Automatic selection (CUDA > MPS > CPU)
python -m playground.playground_cli --device auto

# Force a specific device
python -m playground.playground_cli --device cuda  # Require CUDA
python -m playground.playground_cli --device mps   # Require Apple Silicon
python -m playground.playground_cli --device cpu   # Force CPU
```

When training starts, you'll see a clear device indicator:
```
Using device: cuda
GPU: Tesla V100-SXM2-32GB
```
To confirm GPU usage on a cluster:
- Check that the log shows "Using device: cuda"
- Run `nvidia-smi` in another terminal and look for a python process using GPU memory
For Apple Silicon:
- Check that the log shows "Using device: mps"
- Open Activity Monitor > Window > GPU History
- Look for a python process using the GPU
- CUDA: PyTorch with CUDA support (installation guide)
- MPS: macOS 12.3+ with Apple Silicon, PyTorch 1.12+
- CPU: Works out of the box
The playground handles pin_memory and other device-specific optimizations automatically.
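The CUDA > MPS > CPU fallback described above can be sketched as follows (a hypothetical helper, not the playground's actual implementation):

```python
import torch

def pick_device(choice: str = "auto") -> torch.device:
    """Resolve a --device flag: honor an explicit choice, otherwise
    fall back through CUDA > MPS > CPU."""
    if choice != "auto":
        return torch.device(choice)  # "cuda", "mps", or "cpu"
    if torch.cuda.is_available():
        return torch.device("cuda")
    mps = getattr(torch.backends, "mps", None)  # absent on older PyTorch builds
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
```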
- Complete Notebook - Many methods with visualizations
```bibtex
@inproceedings{alshammariunifying,
  title={I-Con: A Unifying Framework for Representation Learning},
  author={Alshammari, Shaden Naif and Hershey, John R and Feldmann, Axel and Freeman, William T and Hamilton, Mark},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
```

Questions? Contact Shaden Alshammari. The CLI Playground was built by Siddharth Manne. For more details, see this document.