Seldon Lab

Venture Capital and Private Equity Principals

Building the future of AI security

About us

Building technology for an AGI-secure world.

Website: www.seldonlab.com
Industry: Venture Capital and Private Equity Principals
Company size: 2-10 employees
Headquarters: San Francisco
Type: Privately Held

Updates

  • Seldon Lab reposted this

    WeaveMind just went from 0 to 200 stars on GitHub in 3 hours. Quentin spent the last weeks building a new programming language for AI. 20x faster than Claude Code. More robust by default. Right now, he is putting out fires, seeing hundreds of new users every hour and turning over couch cushions to find more compute. If you work at GCP or Amazon Web Services (AWS), please reach out. Oh also, go try http://weavemind.ai

  • Seldon Lab reposted this

    Seldon Batch 2 Demo Day is one week away. Six companies building the future of AI security, pitching in San Francisco on April 16. Joined on stage by three people we've been waiting to get in a room together: Geoff Ralston - founder of SAIF, former President of Y Combinator; Emmett Shear - co-founder/CEO of Softmax, co-founder of Twitch; Joshua Achiam - Chief Futurist at OpenAI. Limited spots available. Waitlist in comments. Esben K., Finn Metz, Nick Fitz, Griffin Bohm, Mike Mahlkow, Apart Research, BlueDot Impact, Shawn K., James Alcorn, Lukas Petersson, Kristian Rönn, Tobias Nilsson-Roos

  • Seldon Lab reposted this

    We helped Anthropic conduct alignment testing of Claude Mythos Preview. We found that Mythos appears to continue the shift toward more aggressive business practices that we previously observed in Claude Opus 4.6. More details in Anthropic's model card – link in comments.

  • We took our batch 2 founders to Yosemite for a Seldon retreat. We worked on fundraising and pitching, but the conversations that mattered most were the ones about the bigger picture. Why AI safety. What the endgame looks like. What our place in it is. These teams have been moving fast. It was good to slow down for a couple days. Demo Day is April 16th.

  • Seldon Lab reposted this

    Time to exploit in 2018 was 771 days. Now it's 0. Offense is accelerating much faster than defense, and we have empirical data to show this.

    What the data tells us is more interesting than any single vulnerability. Cyber is fundamentally changing, and time to exploit is a good proxy to show how. We already know about too many vulnerabilities. Even if we patched everything today, there's more tomorrow. We're producing more code than ever, and most attacks still start with phishing. More discovery doesn't move the needle. What we need is a fundamentally more efficient way to build resilience.

    My theory of change is to democratise advanced offensive capabilities to trusted organisations, giving them the same tools that will be used against them. Continuous agentic attack and defence, running in your environment, improving how you detect and respond in real time.

    When you think about where AI is heading, towards superintelligence, this is how you protect against it. Not by finding every vulnerability, but by building organisations that can take the hit and respond before it becomes a crisis. We need more people thinking about how we can scale defence, and this is fundamental to why we created 0Labs.

  • Seldon Lab reposted this

    What does it take to build AI responsibly as the technology moves faster than ever? In our latest spotlight, we meet Finn Metz, a former SCET Startup Semester student turned founder working at the forefront of AI safety. Originally from Germany, Finn came to UC Berkeley to deepen his experience in entrepreneurship and AI safety. Today, he is the co-founder of Seldon Lab, an accelerator investing in startups building the security, cyber, and research infrastructure needed for the age of AI. Read the full story as he shares why founders should take on the challenges of oversight, cyber resilience, and the broader societal impacts of AI with urgency, intention, and responsibility: https://hubs.li/Q04711hB0 #ucberkeley #aisafety #entrepreneurship #innovation #responsibleinnovation

  • Seldon Lab reposted this

    How do we know advanced AI systems actually work as intended? 🔍 Esben K. of Apart Research and Seldon works on model evaluation, auditing, and practical AI safety—focusing on how intelligent systems are tested, measured, and deployed responsibly. This work bridges research and real-world application, examining not just benchmark performance but behavior in production environments where reliability and accountability matter. As frontier systems scale, so must the infrastructure that evaluates them. Esben is joining us at FtC SF: Intelligence at the Frontier, March 14–15 in San Francisco—where we’re exploring the funding and coordination systems needed to steward intelligence at scale. 👉 luma.com/ftc-sf-2026

  • Seldon Lab reposted this

    I spent three and a half years finding every way AI systems break. Red teaming for OpenAI and Anthropic. Capability evaluations at METR. Automated red-teaming with PRISM Eval. Testing agents in production-like conditions at Trajectory Labs and watching them fail in ways nobody anticipated.

    Here's what I learned on the attack side: most failures aren't model failures. The model does something unexpected, sure, but the system had no review step, no recovery path, no clear guidelines on how to act outside of distribution. The fix is usually obvious in hindsight. What's not obvious is why nobody has built the infrastructure to make it easy yet. I watched smart teams duct-tape things together because the tooling doesn't exist to do it right.

    I got tired of pointing fingers. So I decided to roll up my sleeves and build WeaveMind: a programming language to coordinate humans, AIs and technology. Human reviews baked in via browser extension. Proper failure recovery. Hybrid deployment so sensitive steps stay local. The goal is to make the right way to build also the easiest and most powerful way, so you don't have to compromise.

    Early beta. Open sourcing Q2 2026. If you're having trouble getting your automation into production, DM me or link in comments.

