It's day 2 at #CES2026! We're excited to be on the show floor with our partners at Foretellix in LVCC West Hall, Booth 4767, talking with teams about what it really takes to scale AV safely. If you’re at CES too and building AV systems, we'd love to meet and dig into how we're bringing high-fidelity reconstructions and synthetic data generation to AV developers, helping teams save weeks of engineering time and millions in compute dollars. See you on the floor! 🏎️
Voxel51
Software Development
Ann Arbor, Michigan · 38,597 followers
The most powerful visual AI and computer vision data platform.
About us
Voxel51 is the most powerful visual AI and computer vision data platform. Voxel51 streamlines visual data curation and model analysis, simplifying the labor-intensive work of visualizing data and surfacing insights during curation and model refinement: a major challenge in large-scale data pipelines with billions of samples. With over 3 million open source installs and customers like Walmart, GM, Bosch, Medtronic, and University of Michigan Health, FiftyOne is an indispensable tool for building computer vision systems that work in the real world, not just in the lab.
- Website: https://voxel51.com
- Industry: Software Development
- Company size: 51-200 employees
- Headquarters: Ann Arbor, Michigan
- Type: Privately Held
- Founded: 2018
Locations
- Primary: 330 E Liberty St, Ann Arbor, Michigan 48104, US
Updates
Join our virtual meetup on Mar 5 to hear talks from experts on cutting-edge topics across AI, ML, and computer vision. Register here: https://lnkd.in/gr_KMH54

Talks will include:
* MOSPA: Human Motion Generation Driven by Spatial Audio - Zhiyang (Frank) Dou at Massachusetts Institute of Technology
* Securing the Autonomous Future: Navigating the Intersection of Agentic AI, Connected Devices, and Cyber Resilience - Samaresh Singh at HP
* Transforming Business with Agentic AI - Joyjit Roy at Kforce Inc.
* Plugins as Products: Bringing Visual AI Research into Real-World Workflows with FiftyOne - AdonAI Vera at Voxel51
Voxel51 reposted this
🤖 Beyond the Robot Hype at CES 2026 🤖 The CES showroom opened today, and if you're walking the floor in Las Vegas, prepare for thrilling humanoid robot demos and the promises we've been hearing since the 1960s. In my new article for Fortune, I explain why home robots need to follow the self-driving car playbook, and why we're further from Rosie the Robot in The Jetsons than the current hype suggests. Here's the thing: these systems need to work 99.999% of the time when the stakes involve our kids, aging parents, and whether the stove gets left on. Plus, they need massive amounts of intimate data about our homes. Better to follow what worked for cars: solve one problem at a time, keep humans in the loop, and earn trust before demanding it. Read the full piece: https://lnkd.in/d3baX4-d #CES2026 #AI #Robotics #AutonomousSystems University of Michigan College of Engineering University of Michigan Robotics Department Michigan AI Laboratory Voxel51 Sharon Goldman
Voxel51 reposted this
NVIDIA is going all in on physical AI. At #CES, they showed how models, simulation, and hardware are finally coming together to power real-world systems. A standout moment was Alpamayo driving through downtown SF with its reasoning visible in real time, plus updates across Cosmos and Isaac GR00T focused on simulation-first physical AI. At Voxel51, we’re excited to help enable this shift by building better data and evaluation pipelines with the team. Feels like a big moment for Physical AI. 🚀 Daniel Gural, Jimmy Guerrero, Ryan Patrick Sweeney
Voxel51 is heading to #CES2026! Find us at LVCC West Hall, Booth 4767, for our partnership demo with Foretellix. Stop by to see how you can transform real-world drive logs into high-fidelity, simulation-ready datasets. Join us for an automotive happy hour with NVIDIA, Foretellix, and Parallel Domain. 📅 Jan 7, 3:30 PM 📍 LVCC West Hall, Booth 4767 🔗 RSVP: https://luma.com/4lmhl4na Can't make it? Learn more about Physical AI Workbench: https://lnkd.in/edpdQg2F
Voxel51 reposted this
Join us on Feb 12 for the #Seattle AI, ML and Computer Vision Meetup in Bellevue! Register to reserve your spot: https://lnkd.in/gjwA9U9u

Talks will include:
* The World of World Models: How the New Generation of AI Is Reshaping Robotics and Autonomous Vehicles - Daniel Gural at Voxel51
* ALARM: Automated MLLM-Based Anomaly Detection in Complex-EnviRonment Monitoring with Uncertainty Quantification - Congjing Zhang at University of Washington
* Modern Orchestration for Durable AI Pipelines and Agents - Flyte 2.0 - Sage Elliott at Union.ai
* Context Engineering for Video Intelligence: Beyond Model Scale to Real-World Impact - James Le at TwelveLabs
* Build Reliable AI Apps with Observability, Validations and Evaluations - Hoc Phan at Okahu.ai

Want to build better computer vision models? FiftyOne is an open source toolkit from Voxel51 (our Meetup sponsor) that helps you curate datasets, evaluate model performance, visualize embeddings, catch annotation errors, and eliminate duplicate images, all in one place. "pip install fiftyone" is all it takes to get started: https://docs.voxel51.com/ #computervision #ai #artificialintelligence #machinevision
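As a flavor of the duplicate-elimination workflow mentioned above, here is a minimal, dependency-free sketch: exact duplicates can be grouped by content hash. This is an illustration only, not FiftyOne's implementation (which also uses embedding similarity to catch near-duplicates), and the function name is hypothetical.

```python
import hashlib
from pathlib import Path

def find_exact_duplicates(paths):
    """Group files by SHA-256 content hash; any group with more than
    one path contains byte-identical (exact duplicate) images."""
    groups = {}
    for p in paths:
        digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        groups.setdefault(digest, []).append(p)
    return [group for group in groups.values() if len(group) > 1]
```

Hashing only catches byte-identical copies; re-encoded or resized duplicates need perceptual hashing or embedding similarity, which is where a dedicated tool earns its keep.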
Voxel51 reposted this
🎊 Happy new year 2026! 🔮 I predict that this year we will be playing with "agents" a lot more. 🧩 What are they, how do they work, and do they work as promised? If you are a student at Arizona State University and would like to get hands-on with industry toolkits and workflows, we invite you to join us at the "Agents World: Visual AI Hackathon at ASU," with AdonAI Vera of Voxel51, on March 21st, 2026, at The GAME School at ASU. The built-in intro workshop at 10:30am will get you up to speed.

Schedule
• 10:00 AM – Welcome and introduction
• 10:15 AM – Find teammates & pip install fiftyone
• 10:30 AM – Workshop challenges with special prizes. The workshop focuses on Generative Agents: exploring how agents can help us understand what's happening in our data, including visual reasoning, synthetic images and videos, multimodal insights, and more.
• 12:30 PM – Lunch
• 1:00 PM – Plugin tracks: define your use case
• 1:30 PM – Hacking!
• 3:00 PM – First push to GitHub
• 5:00 PM – Final push to GitHub & judging begins

Registration open now! https://lnkd.in/gEMKKJA3

We will also be hosting an AI meetup the evening of March 20th with Jimmy Guerrero, prior to the hackathon, with spotlights from leading AI faculty at ASU. Come socialize with your faculty and learn more about their work.
✪ Lalitha Sankar, Assuring Performance of Diffusion Models for Imbalanced Datasets
✪ Ransalu Senanayake, Agents for Physical Robotics
✪ Abhishek Singharoy, Machine Learning for Biophysics
✪ Mark Ollila, Agentic AI for Games
✪ 'YZ' Yezhou Yang, Towards Controllable and Explainable Visual GenAI for Creativity
✪ Giulia Pedrielli, Agents and Digital Twins for the Manufacturing Sector
✪ Adam Nocek, Where Agentic AI and Game Design Collide: Perspectives from Philosophy
✪ Dr. Kobi Abayomi, Machine Learning for the Music Recommendation Industry
AdonAI Vera / Jimmy Guerrero / Pooyan Fazli / Paula Ramos, PhD / Ross Maciejewski / Bruno Sinopoli / Lev Gonick / Sean Hobson, Ph.D. / Mark Ollila / Lalitha Sankar / Ransalu Senanayake / Vivek Gupta / Giulia Pedrielli / Visar Berisha / Gautam Dasarathy / Tejaswi Gowda / Renee Cheng / Sandra Stauffer / Ashley Campbell / Chelsea Phillips, MPA / Sami Mian, Ph.D. / Don Fotsch / Rakshith Subramanyam / Dan Bliss
Check out our community plugins and integrations at: https://lnkd.in/eF4FbNSD
2025 had 261 work days. in that time i shipped 116 integrations for fiftyone: 66 datasets, 38 models, 12 plugins. that's not to mention the various workshops, virtual events, and in-person meetups held around the world in places like ann arbor, boston, chicago, munich, berlin, dusseldorf, paris (got to meet Merve Noyan and Vaibhav Srivastav), amsterdam (got to meet Tuana Çelik), brussels (got to meet Niels Rogge), stuttgart, saarland, and more. here's a quick summary of what i focused on:

--gui agents--
this was the year gui agents took off. i built the data infrastructure to support it: 17 GUI grounding datasets, 6 visual agent models (GUI Actor, ShowUI, UI-TARS, OS-Atlas, MiMo-VL), plus tooling to collect, synthesize, and evaluate gui data inside fiftyone. if you're training agents that see and click, you need to debug what they see.

--document visual ai--
enterprise wants multimodal RAG that works on real documents. i integrated ColPali, ColQwen, Jina v4, and the ModernVBERT variants, alongside ocr models and document datasets spanning forms, receipts, and scanned text. fiftyone now handles visual document retrieval end-to-end.

--plugins--
datasets and models are useless without workflows. i built a gui dataset collector for capturing and annotating screen interactions in coco format. a lerobot importer that preserves multi-camera views and trajectory metadata. a wandb plugin for tracking training data and model predictions with full lineage. text evaluation metrics (ANLS, CER, WER) for benchmarking OCR. NVIDIA NeMo Retriever Parse for extracting structured text with bounding boxes. the plugins close the loop between data curation, model evaluation, and experiment tracking.

--vision language models--
the vlm landscape moves fast. i kept fiftyone current: Qwen2.5-VL, Florence2, Kimi-VL, PaliGemma2, MedGemma, Nemotron Nano, FastVLM, MiniCPM-V, Moondream3, Qwen3VL. each one integrated into the remote model zoo so you can run inference on your data in a few lines. whatever vlm you're using, it should work with your data tooling.

--physical ai--
early investment in physical AI. built a lerobot dataset importer and started exploring how fiftyone can support policy evaluation and failure analysis for robotic manipulation tasks. the importer handles multi-camera views, episode grouping, and full trajectory metadata: joint states, actions, velocities, efforts.

where are things heading in 2026? and how can i make fiftyone useful for where visual AI is actually going? i have a hunch that vision language action models are the next wave
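for readers curious what the OCR metrics above actually compute, CER and WER both reduce to normalized edit distance (character-level and word-level, respectively). a minimal, dependency-free sketch, not the plugin's actual code (and ANLS additionally averages a thresholded per-answer similarity):

```python
def levenshtein(a, b):
    """Edit distance between two sequences via dynamic programming,
    keeping only one row of the DP table at a time."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    """Character error rate: char-level edit distance / reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    return levenshtein(ref, hyp) / max(len(ref), 1)
```

note that both metrics can exceed 1.0 when the hypothesis is much longer than the reference, which is why OCR benchmarks often clip or report them alongside exact-match rates.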
Voxel51 reposted this
Voxel51 is hiring! What's awesome about working here? Our product, our people, and our growth. I know, I know, everyone says that. I guess you'll have to reach out to learn more about how special this place is! 🙂

We have 3 specific roles open:
✨ Content Marketing Manager
⭐ Product Marketing Manager
🌟 Senior Full Stack Engineer, FE-leaning (React is a requirement)

(We're hiring in the US and Canada, working remotely, and finding time to connect in person ~2 times per year.) If you're a good fit, please apply through our careers page or send your resume to me directly at remy@voxel51.com. Please use the subject line "Per LinkedIn" and make sure to include your LI profile and a resume. Thanks! https://lnkd.in/gxP8n9np
Voxel51 reposted this
Are you still wasting all of your money labeling all of your data? 🛑 💰 🛑 We'll see you in Tucson in March at #WACV2026! The ML team at Voxel51 will be presenting our state-of-the-art work in practical coreset selection (with no expensive training): "Zero-Shot Coreset Selection: Efficient Pruning for Unlabeled Data." 🚀

Training contemporary models requires massive amounts of labeled data. Despite advances in weak and self-supervision, common practice still relies on labeling everything and using full supervision for production models. But much of that labeled data is redundant: you don't need to label it all! Our method, Zero-Shot Coreset Selection (ZCore), sets a new state of the art for selecting which part of your unlabeled data to label, without sacrificing performance compared to using the entire labeled set.

Bottom line: ZCore saves money on annotation, speeds up training, and outperforms all existing coreset selection approaches for unlabeled data, as well as most that rely on labeled data. The core method in ZCore is available open source and in #FiftyOne, and is coming to FiftyOne Enterprise with more features in an upcoming release!

Check it out:
📄 Paper: https://lnkd.in/eHqhVmwR
💻 GitHub Repo: https://lnkd.in/eyFbCQeD

Brent Griffin · Jacob Marks · Voxel51 · Michigan AI Laboratory · University of Michigan Robotics Department · Electrical and Computer Engineering at the University of Michigan · Computer Science and Engineering at the University of Michigan
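To give a flavor of what coreset selection does, here is a minimal greedy k-center sketch over embedding vectors: repeatedly pick the sample farthest from everything already selected, so near-duplicates get skipped and the subset covers the data. This is an illustration of the general idea only, not the ZCore algorithm (which scores unlabeled samples zero-shot using foundation-model embeddings); the function name is hypothetical.

```python
import math
import random

def farthest_point_coreset(embeddings, k, seed=0):
    """Greedy k-center selection: start from a random sample, then
    repeatedly add the point farthest from the current selection.
    Redundant near-duplicates are naturally passed over, so the
    returned indices cover embedding space with few samples."""
    rng = random.Random(seed)
    n = len(embeddings)
    selected = [rng.randrange(n)]
    # distance from each point to its nearest selected point
    dist = [math.dist(e, embeddings[selected[0]]) for e in embeddings]
    while len(selected) < k:
        idx = max(range(n), key=dist.__getitem__)
        selected.append(idx)
        dist = [min(d, math.dist(e, embeddings[idx]))
                for d, e in zip(dist, embeddings)]
    return selected
```

In a real pipeline you would run this (or a method like ZCore) over embeddings of your unlabeled pool, then send only the selected indices to annotation.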