I am a research scientist interested in understanding the fundamental properties of deep learning systems in order to make them more reliable, robust, efficient, and broadly beneficial for society. Currently, I am at Google DeepMind, working on multimedia provenance and watermarking as part of the SynthID effort.
Prior to that, I completed my PhD with the Autonomous Intelligent Machines and Systems CDT at the University of Oxford, where I was supervised by Philip Torr and Adel Bibi. I was also a research intern at Motional, Adobe, DeepMind, and Meta. My thesis established the universal in-context approximation capabilities of sequence models.
Before coming to Oxford, I earned my MSc at ETH Zürich, focusing on robotics, machine learning, statistics, and applied category theory. My thesis was on Compositional Computational Systems. At ETH, I worked closely with Prof. Emilio Frazzoli's group, and my studies were generously funded by the Excellence Scholarship & Opportunity Programme (ESOP).
Full list on Google Scholar.
We Can Hide More Bits: The Unused Watermarking Capacity in Theory and in Practice
Aleksandar Petrov, Pierre Fernandez, Tomáš Souček, Hady Elsahar
Long Context In-Context Compression by Getting to the Gist of Gisting
Aleksandar Petrov, Mark Sandler, Andrey Zhmoginov, Nolan Miller, Max Vladymyrov
On the Coexistence and Ensembling of Watermarks
Aleksandar Petrov, Shruti Agarwal, Philip H.S. Torr, Adel Bibi, John Collomosse
Conference on Neural Information Processing Systems (NeurIPS) 2025
Universal In-Context Approximation By Prompting Fully Recurrent Models
Aleksandar Petrov, Tom A. Lamb, Alasdair Paren, Philip H.S. Torr, Adel Bibi
Conference on Neural Information Processing Systems (NeurIPS) 2024
Risks and Opportunities of Open-Source Generative AI
Francisco Eiras, Aleksandar Petrov, Bertie Vidgen, Christian Schroeder, Fabio Pizzati, Katherine Elkins, Supratik Mukhopadhyay, Adel Bibi, Aaron Purewal, Csaba Botos, Fabro Steibel, Fazel Keshtkar, Fazl Barez, Genevieve Smith, Gianluca Guadagni, Jon Chun, Jordi Cabot, Joseph Imperial, Juan Arturo Nolazco, Lori Landay, Matthew Jackson, Philip H.S. Torr, Trevor Darrell, Yong Lee, Jakob Foerster
International Conference on Machine Learning (ICML) 2024
Prompting a Pretrained Transformer Can Be a Universal Approximator
Aleksandar Petrov, Philip H.S. Torr, Adel Bibi
International Conference on Machine Learning (ICML) 2024
When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations
Aleksandar Petrov, Philip H.S. Torr, Adel Bibi
International Conference on Learning Representations (ICLR) 2024
Language Model Tokenizers Introduce Unfairness Between Languages
Aleksandar Petrov, Emanuele La Malfa, Philip H.S. Torr, Adel Bibi
Conference on Neural Information Processing Systems (NeurIPS) 2023
Certifying Ensembles: A General Certification Theory with S-Lipschitzness
Aleksandar Petrov*, Francisco Eiras, Amartya Sanyal, Philip H.S. Torr, Adel Bibi*
International Conference on Machine Learning (ICML) 2023
HiddenGems: Efficient Safety Boundary Detection with Active Learning
Aleksandar Petrov, Carter Fang, Khang Minh Pham, You Hong Eng, James Guo Ming Fu, Scott Drew Pendleton
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022
Compositional Computational Systems
Aleksandar Petrov, supervised by Gioele Zardini, Andrea Censi, Emilio Frazzoli
Master's thesis
Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents
Jacopo Tani, Andrea F. Daniele, Gianmarco Bernasconi, Amaury Camus, Aleksandar Petrov, Anthony Courchesne, Bhairav Mehta, Rohit Suri, Tomasz Zaluska, Matthew R. Walter, Emilio Frazzoli, Liam Paull, Andrea Censi
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020
Learning Camera Miscalibration Detection
Andrei Cramariuc*, Aleksandar Petrov*, Rohit Suri, Mayank Mittal, Roland Siegwart, Cesar Cadena
IEEE International Conference on Robotics and Automation (ICRA) 2020