Sereact

Software Development

Embodied AI for robotics

About us

Sereact's mission is to drive your economic growth by closing automation gaps in your intralogistics. Our AI software enables machines to perceive their environment and develop solution strategies on their own, thus qualifying them to become autonomous skilled workers. By leveraging embodied AI for robotics, we ensure that our systems not only think but also physically interact with their surroundings in an intelligent manner. Sereact's AI software for autonomous robotics fully automates pick-and-place processes, making them more efficient, reliable and resilient. Our goal is to optimize your supply chain with minimal integration effort to increase productivity in your warehouse from the first pick while significantly reducing costs.

Website
https://sereact.ai
Industry
Software Development
Company size
51-200 employees
Headquarters
Boston (US) / Stuttgart (DE)
Type
Public Company
Specialties
Robotics, Industrial Automation, Deep Learning, Artificial Intelligence, Logistics, and Embodied AI

Locations

Employees at Sereact

Updates

  • Most robotics demos look impressive. Then the robot meets a real warehouse. New SKUs. Damaged packaging. Items that shift weight mid-pick. Surfaces the training data never included. Performance drops. Fast.

    This is the generalization problem in robotics. And it is not solved by bigger models or more simulation. Simulation approximates contact. It does not reproduce what happens when a deformed box meets a gripper at speed on a Tuesday night shift. Only real-world picks do that.

    Every pick in a live warehouse generates data that simulation cannot produce. Force feedback. Edge cases. The long tail of physical reality that no lab replicates.

    This changes the math on how robotics companies should be evaluated. The question is not "how good is the model?" The question is "how many real-world interactions is the system learning from?"

    Because at some point, deployed robots stop needing manual tuning for new environments. They adapt. Transfer improves. New SKUs take hours, not weeks. Generalization becomes a byproduct of operating at scale. Not a research milestone. An operational one.

  • Partnership extended: Kardex & Sereact, 200 AI robots.

    Kardex AS Solutions and Sereact have signed a Letter of Intent (LOI) to extend their technology partnership, targeting the rollout of up to 200 robotic systems over the coming years.

    This extension underscores the strong collaboration between both teams and lays the foundation for the joint development and deployment of advanced pick & place solutions, AI-powered vision systems, and (mobile) dual-arm robotic stations, particularly for complex returns handling and kitting use cases.

    The partnership is already backed by concrete customer projects, including MS Direct, Sonepar, and Wero, demonstrating the practical impact of our joint technology approach. Notably, MS Direct has already expanded its deployment, highlighting the scalability and performance of the solution.

    A key strategic focus moving forward is the continued development of our dual-arm setup, enabling higher flexibility, improved throughput, and enhanced handling of complex logistics processes. Together, we are advancing intelligent automation solutions for returns handling and kitting, two areas with significant growth potential in modern warehouse operations.

    "This partnership allows us to jointly push the boundaries of intelligent warehouse automation," says Daniel Hauser, Head of Business Unit AS Solutions, Kardex.

    "Together with Kardex, we're scaling AI-driven robotics into real, high-impact logistics applications," says Ralf Gulde, Co-Founder & CEO, Sereact.

    Jörg Ziesmann Jens Pommerening Marco Pfendsack
  • Simulation gets contact physics wrong. Friction, compliance, transient contacts, force feedback. These are the variables that determine whether a grasp succeeds or fails, and they are extremely difficult to model faithfully in synthetic environments. Policies trained primarily in simulation can perform well under controlled conditions. In live production environments, performance often degrades.

    Instead of optimizing for simulation benchmarks, we optimize for production reality. Today, 100+ Sereact robots operate across live customer sites. Every interaction, whether a successful pick, a failure, or a recovery, is captured with synchronized visual streams, force telemetry, proprioceptive signals, and action trajectories. Not just outcomes. The full causal structure of execution.

    This data continuously flows into Cortex. It is curated for novelty and uncertainty, used to retrain and validate policies, and then redeployed across the fleet. As real-world interaction data scales:

    • Task competence improves
    • Convergence time for new tasks decreases
    • Transfer across robot embodiments strengthens

    Generalization becomes an operational property of the system, not a benchmark metric. We call this the Real-World Learning Loop. We've published a technical breakdown detailing how it works, including cross-embodiment transfer results and why real-world interaction data compounds in ways simulation cannot. Full article below.
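    To make the curation step concrete, here is a minimal sketch of how episodes might be scored for "novelty and uncertainty" before retraining. All names (`curation_score`, the episode fields, the weightings) are illustrative assumptions, not Sereact's actual pipeline: uncertainty is stood in for by mean policy entropy, and novelty by distance to the nearest episode already in the dataset.

    ```python
    import numpy as np

    def curation_score(policy_entropy, episode_embedding, replay_embeddings,
                       w_uncertainty=0.5, w_novelty=0.5):
        """Score one episode for inclusion in the retraining set.

        policy_entropy:    mean action-distribution entropy over the episode
                           (higher = the policy was less certain).
        episode_embedding: fixed-size feature vector summarizing the episode.
        replay_embeddings: (N, D) array of embeddings already collected.
        """
        # Novelty: distance to the nearest episode already in the dataset.
        dists = np.linalg.norm(replay_embeddings - episode_embedding, axis=1)
        novelty = dists.min() if len(dists) else np.inf
        return w_uncertainty * policy_entropy + w_novelty * novelty

    def select_for_retraining(episodes, replay_embeddings, budget):
        """Keep the top-`budget` episodes by curation score."""
        scored = sorted(
            episodes,
            key=lambda ep: curation_score(ep["entropy"], ep["embedding"],
                                          replay_embeddings),
            reverse=True,
        )
        return scored[:budget]
    ```

    The point of a scheme like this is that routine, confidently handled picks are cheap to discard, while surprising or unfamiliar episodes are exactly the ones worth the labeling and retraining cost.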

  • We didn’t start with humanoids. And that was deliberate. Not because we didn’t believe in them. But because generalization comes first. Humanoids amplify intelligence. They don’t create it. We started in warehouses. High repetition. High variation. Real constraints. That’s where you stress-test grasping, recovery, sequencing, and long-tail edge cases at scale. Now we’re deploying humanoids. Not as a demo. But as the next logical embodiment of a system that already generalizes.

  • General-purpose robots are not born general. They become general. Not by scaling models alone, but by expanding experience in the right order. At Sereact, generalisation is a strategy, not an outcome.

    We begin with warehouse pick-and-place because it concentrates reality. Dense interaction. Clear feedback. Endless variation in objects, clutter, and contact. This is where robots learn the fundamentals that transfer everywhere else: visuomotor alignment, grasp and recovery, closed-loop interaction under pressure.

    From there, we extend capability step by step. Returns handling adds uncertainty and deformables. Dual-arm systems add coordination and role separation. Kitting and sequencing add long-horizon structure. Manufacturing adds process constraints and contact-rich execution. Each step stays adjacent to what came before. No discontinuities. No resets.

    As the distribution expands, capability compounds. By the time robots operate in semi-structured or human environments, generalisation is no longer the question. Coverage and density are.

    Generalisation is built, not discovered. This is our strategic path to generalisation. Full article on the blog.

  • Generalization is not one dataset. And it never was.

    The idea that you can "just collect enough data" and suddenly get general-purpose robotics is appealing. It's also wrong. Real-world generalization doesn't fail because models lack information. It fails because the distribution is incomplete.

    Robots don't operate in a flat data space. They operate across interaction regimes shaped by:

    > contact and force
    > sequencing and recovery
    > wear, drift, and long-tail variation

    Adding more data from the same regime doesn't expand capability. It just makes the model more confident inside its comfort zone. Generalization emerges when systems are exposed to adjacent interaction regimes:

    > tasks that overlap but introduce new constraints
    > environments that force recovery, not just success
    > executions where "almost working" matters more than outcomes

    This is why progress in real-world robotics isn't a jump. It's a controlled expansion across tasks, environments, and embodiments. And it's why the next step isn't "a bigger dataset". We'll break this down in the next blog post.

  • We just released Cortex 1.6: a new way for robots to learn from real-world execution, not just success or failure.

    A warehouse pick doesn't fail when the item drops. It fails earlier. Vacuum margin decays. Retries start. Cycle time inflates. The grasp becomes fragile long before it becomes a failure. Most learning systems still reduce that whole sequence to one bit: success or failure. Cortex 1.6 is about learning from the way things unfold.

    Today we're introducing the Process-Reward Operator (PRO), a learned reward model that extracts dense learning signal directly from real deployment telemetry, not hand-built rewards or simulation proxies. PRO turns execution into three continuous signals:

    Progress. Are we moving toward completion or drifting away?
    Completion likelihood. How likely is the current trajectory to succeed?
    Risk. How likely is instability or failure, like slips, drops, or jams?

    Why this matters: robots generate rich operational data in production. Forces, vacuum margins, retries, timing, proprioception, WMS context. Until now, most of that signal was discarded. With PRO, routine operation becomes training data. Not just rare terminal failures. Not just explicit human interventions.

    Results from live production deployments across pick and place, shoebox opening, and returns handling:

    • Success rate: ~98% (Cortex 1.6) vs ~95% (Cortex 1.5) vs ~80–82% (imitation baseline)
    • Convergence time: 9h vs 15h vs 24h on pick and place; 14h vs 27h vs 45h on shoebox opening; 26h vs 48h vs 80h on returns handling
    • Recovery success after an initial failure: ~80% vs ~65% vs ~45%
    • Average retries per episode: down 30–50%
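  • To illustrate the shape of a process-reward interface, here is a minimal sketch of three heads mapping a telemetry window to progress, completion likelihood, and risk, then combining them into a dense per-step reward. This is not Sereact's PRO: the feature names are hypothetical, and each "head" is a fixed linear scorer plus sigmoid standing in for a trained model.

    ```python
    import numpy as np

    # Hypothetical per-timestep telemetry features; illustrative only.
    FEATURES = ["vacuum_margin", "grip_force", "retry_count", "cycle_time_ratio"]

    def pro_heads(telemetry_window, w_progress, w_complete, w_risk):
        """Map a (T, F) telemetry window to three scalars in (0, 1):
        progress, completion likelihood, and risk. A real process-reward
        model would be learned; here each head is linear + sigmoid."""
        x = telemetry_window.mean(axis=0)        # summarize the window
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        return (sigmoid(x @ w_progress),
                sigmoid(x @ w_complete),
                sigmoid(x @ w_risk))

    def dense_reward(progress, completion, risk,
                     alpha=1.0, beta=1.0, gamma=1.0):
        """Combine the three signals into one dense per-step reward:
        reward progress and likely completion, penalize instability."""
        return alpha * progress + beta * completion - gamma * risk
    ```

    The key contrast with a one-bit success/failure label is that this reward is emitted at every step, so a grasp whose vacuum margin is decaying produces a falling signal long before the item actually drops.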

  • This is what scale actually looks like. Not a single demo. Not one perfect setup. Many robots. Many sites. Running real work. At the same time. This is where things change. Because once you operate at this level, progress stops being fragile. Improvements don’t live in one warehouse. Fixes don’t stay local. Learning doesn’t reset. What you’re seeing here isn’t hardware. It’s a system that compounds. Every deployment adds signal. Every update gets amortised. Every new robot starts ahead of the last one. This is the difference between running robots and running a learning system at scale. And once you cross that line, there’s no going back.

Similar pages

Browse jobs

Funding

Sereact: 2 total rounds

Last Round

Series A

US$ 26.0M

See more info on Crunchbase