Connecting scientists with use-inspired AI safety research
We are a non-profit research organization building scientific foundations for AI safety to ensure robust, scalable alignment.
73 alumni placed in government, AI safety labs, and universities
Our alumni work as independent researchers and in organizations across these sectors.
We are funded by Open Philanthropy, the Survival and Flourishing Fund, the AI Safety Tactical Opportunities Fund, and the Cooperative AI Foundation.
Summer Research Fellowship
3-month interdisciplinary program pairing fellows from complex systems sciences with AI safety mentors. ~20 fellows work on projects at the intersection of their field and AI safety.
Affiliate Program
6-12 months of tailored support for established researchers pursuing PIBBSS-style AI risk research. $6,000-10,000/month stipend plus research community access and strategic support.
Research Residency
6-week applied mathematics program with Iliad in London. For PhD and postdoctoral researchers in math, physics, CS, or related fields. $6,000-15,000 stipend based on experience.
Bridging Complex Systems and AI Safety
Principles of Intelligence facilitates research that draws on parallels between intelligent behavior in natural and artificial systems. We believe insights from ecology, neuroscience, economics, physics, and other sciences studying complex systems can inform the development of safe and beneficial AI.
Our programs support researchers exploring questions like: How do multi-agent dynamics in biological systems inform AI governance? What can developmental biology teach us about alignment? How do legal frameworks apply to emerging AI risks? What principles govern intelligence across different substrates and scales?
Our Community
Principles of Intelligence brings together researchers, mentors, and advisors from diverse scientific backgrounds.

Alexander Gietelink Oldenziel
Jan Kulveit
Patrick Butlin
Tan Zhi-Xuan
David A. Dalrymple