Liquid AI

Information Services

Cambridge, Massachusetts · 33,757 followers

We build efficient general-purpose AI at every scale.

About us

We build efficient general-purpose AI at every scale.

Website
http://liquid.ai
Industry
Information Services
Company size
51-200 employees
Headquarters
Cambridge, Massachusetts
Type
Privately Held
Founded
2023

Updates

  • Liquid AI reposted this

    Ramin Hasani, CEO of Liquid AI:

    I’m proud to announce our new partnership with Mercedes-Benz AG to bring embedded, on-device intelligence to Mercedes-Benz vehicles, starting in North America. This marks an important step toward making in-car AI more capable, more responsive, and more useful in everyday driving.

    At Liquid AI, we believe the future of intelligence in the physical world depends on models that are fast, private, efficient, and able to run directly on the hardware already inside the system. In the vehicle, that means advancing speech, language understanding, and reasoning, enabling more natural and robust conversational experiences for drivers and passengers. The software-defined vehicle is one of the most consequential real-world deployments of AI, and Mercedes-Benz has approached it with exactly the rigor this challenge deserves.

    Proud of what our teams are building together, and excited for the road ahead as we work toward production deployment in the second half of 2026. I am also deeply grateful for the support of the Mercedes-Benz executives Joerg Burzer, Magnus Östberg, and Jason Hoff in kicking off this initiative, with many impactful prospects ahead.

    Press: https://lnkd.in/gQacCDw4
    Blog: https://lnkd.in/gniyZRzN

  • Mercedes-Benz AG 🤝 Liquid AI! ✨ Read the details: https://lnkd.in/gCrAfk83

    Embedded AI: private, fast and intelligent. Here’s how: 🤖 Mercedes-Benz is partnering with Liquid AI to bring on-device intelligence directly into our vehicles with third- and fourth-generation MBUX in North America. This is embedded AI running on-device with low latency and without constant cloud exchange. Liquid AI’s Liquid foundation models (LFMs) deliver natural speech, language understanding and reasoning where it matters most: inside your Mercedes-Benz.

    I recently shared the stage with Liquid AI's CEO Ramin Hasani at the McKinsey & Company AI CEO Summit. Now, the collaboration between our companies allows the MBUX Virtual Assistant and other in-car experiences in North America to evolve even further. By processing speech output on-device, embedded AI can deliver a more consistent experience in everyday driving situations. With MB.OS enabling deep integration, we are targeting a first production deployment for speech technology as early as the second half of this year. This once again underscores the significance of our in-house developed operating system: it enables us to add features seamlessly, bringing partner innovations to series production quickly.

    I’m very much looking forward to this next generation of in-car AI experience. What do you think of it?

  • Today, I am excited to announce a strategic partnership with Liquid AI that will bring embedded AI to Mercedes-Benz models with third- and fourth-generation MBUX in North America. By processing language understanding and reasoning on-device, we are bringing advanced speech and reasoning capabilities directly into our vehicles. This complements cloud-based Large Language Models and compute ecosystems with efficient embedded intelligence, and it will further evolve the MBUX Virtual Assistant (MVA) and related in-vehicle experiences. Thanks to the power of MB.OS, we are able to deploy our partner’s speech capabilities as early as the second half of this year. Many thanks to our partners at Liquid AI and Ramin Hasani, as well as our dedicated teams at Mercedes-Benz, for making this possible.

  • We’re entering a multi-year partnership with Mercedes-Benz AG to scale embedded, on-device intelligence for their third- and fourth-generation MBUX. Our goal: to make the driver/vehicle relationship even more natural and effortless.

    Using our Liquid Foundation Models, Mercedes-Benz will bring essential elements of the in-vehicle voice interaction stack on board, enabling customer-ready experiences across speech, language understanding, and reasoning, starting with their models in North America. By focusing on low latency and high efficiency, our models support conversational interactions without relying on continuous data exchange with the cloud and will be in production as early as the second half of this year.

    We’re proud to partner with Mercedes-Benz and thank Joerg Burzer, Magnus Östberg, and Jason Hoff for their support! As our CEO Ramin Hasani noted: “The software-defined vehicle is one of the most consequential deployments of AI in the physical world, and Mercedes-Benz has approached it with exactly the rigor it demands.”

    Read more about our partnership: https://lnkd.in/gCrAfk83

  • Our team is excited to be in Rio de Janeiro for #ICLR2026 to share new research and technical breakthroughs in model reasoning and efficiency, including our published findings on:
    - LLMs’ current capacity for introspection, including the ability to anticipate their own success and the computation required to achieve it
    - A new comparative study of reasoning LLMs, plus a novel analysis framework that quantifies reasoning paths and captures their qualitative changes under each training process
    - DynaProt, a lightweight, SE(3)-invariant framework that predicts rich descriptors of protein dynamics directly from static structures

    If you’re interested in learning more, especially if you’re dedicated to training world-class multimodal models, we’d love to meet you. Visit us at booth #306 to meet with our colleagues Alexander Amini, Maxime Labonne, Edoardo Mosca, Fernando Fernandes Neto, T. Konstantin Rusch, Song Duong, and Neehal Tumma!

    Our accepted papers at ICLR include:
    - RL Squeezes, SFT Expands: A Comparative Study of Reasoning LLMs
    - Zero-Overhead Introspection for Adaptive Test-Time Compute
    - Learning residue level protein dynamics with multiscale Gaussians
    - Antislop: A Comprehensive Framework for Identifying and Eliminating Repetitive Patterns in Language Models
    - The Curious Case of In-Training Compression of State Space Models
    - The Key to State Reduction in Linear Attention: A Rank-based Perspective
    - AlphaQ: Calibration-Free Bit Allocation for Mixture-of-Experts Quantization
    - Low-Pass Flow Matching

    Link in the comments!

  • Today, we're releasing LFM2.5-VL-450M, a more capable version of our smallest vision-language model, built for edge deployment. It processes a 512×512 image in 240 ms — fast enough to reason about every frame in a 4 FPS video stream.

    It builds on LFM2-VL-450M with three new capabilities: bounding box prediction (81.28 on RefCOCO-M), multilingual visual understanding across 9 languages (MMMB: 54.29 → 68.09), and function calling support.

    Most production vision systems are still multi-stage: a detector, a classifier, heuristic logic on top. This model does it in one pass — locating objects, reasoning about context, and returning structured outputs directly on-device. It runs on Jetson Orin, Samsung S25 Ultra, and AMD 395+ Max.

    Open-weight, available now on Hugging Face, LEAP, and our Playground. Read more: https://lnkd.in/dXPTzNvW

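    For readers curious what the single-pass, on-device usage described above might look like in code, here is a minimal sketch using Hugging Face transformers. The repository id "LiquidAI/LFM2.5-VL-450M", the prompt, and the chat-template details are assumptions, not part of the announcement; check the official model card for the exact interface.

    ```python
    # Hypothetical sketch: one-pass visual grounding with a small VLM.
    # The repo id and prompt are placeholders; consult the model card
    # on Hugging Face for the supported loading and chat-template flow.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, AutoModelForImageTextToText

    model_id = "LiquidAI/LFM2.5-VL-450M"  # assumed repo id
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(
        model_id, torch_dtype=torch.bfloat16
    )

    image = Image.open("frame.jpg")  # e.g. one frame from a camera stream
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image},
                {"type": "text",
                 "text": "Locate every person and return bounding boxes as JSON."},
            ],
        }
    ]

    # Build model inputs from the chat template, then generate a single
    # structured response (detection + reasoning in one pass).
    inputs = processor.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=True,
        return_dict=True, return_tensors="pt"
    )
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=256)
    print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
    ```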
  • Today, we release LFM2.5-350M. Agentic loops at 350M parameters.

    LFM2.5-350M was trained for reliable data extraction and tool use. At <500MB when quantized, it is built for environments where compute, memory, and latency are particularly constrained. Trained on 28T tokens with scaled RL, it outperforms larger models like Qwen3.5-0.8B in most benchmarks while being significantly faster and more memory efficient.

    - Runs across CPUs, GPUs, and mobile hardware
    - Fast, efficient, and low-latency
    - Reliable function calling and agent workflows
    - Consistent structured outputs you can depend on

    From day one, it’s deployable across your full stack, from on-device to production systems, with support across key partners:
    – Hardware: AMD, Intel, Qualcomm
    – On-device: LM Studio, Cactus (YC S25), RunAnywhere (YC W26), ZETIC, Mirai Tech Inc
    – Customization: distil labs

    LFM2.5-350M: a small model, built for real workloads. Read more: https://lnkd.in/g53tBb39
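    As an illustration of the kind of agentic loop mentioned above, here is a minimal sketch of a tool-calling cycle against a local, OpenAI-compatible endpoint of the sort runtimes like LM Studio expose. The endpoint URL, model name, and weather tool are illustrative placeholders, not part of the release.

    ```python
    # Minimal agentic tool-calling loop against a local OpenAI-compatible
    # server. URL, model name, and the weather tool are placeholders.
    import json
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
    MODEL = "lfm2.5-350m"  # placeholder model name

    def get_weather(city: str) -> str:
        """Stand-in tool; a real agent would call an actual API here."""
        return json.dumps({"city": city, "forecast": "sunny", "temp_c": 21})

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    messages = [{"role": "user", "content": "What's the weather in Boston?"}]
    for _ in range(4):  # bound the loop so it always terminates
        reply = client.chat.completions.create(
            model=MODEL, messages=messages, tools=tools
        )
        msg = reply.choices[0].message
        messages.append(msg)
        if not msg.tool_calls:       # model answered directly: we're done
            print(msg.content)
            break
        for call in msg.tool_calls:  # run each requested tool, feed results back
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": get_weather(**args),
            })
    ```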

  • AI is beginning to move beyond the clouds… 🚀 Today we’re opening registration for Hack #05: AI in Space, in collaboration with DPhi Space. This hackathon explores what becomes possible when efficient AI models operate closer to satellites, orbital systems, and space-based data. We’re inviting developers, researchers, and space enthusiasts to experiment with new applications, from satellite data intelligence to autonomous systems operating beyond the atmosphere. If you’re curious about how AI may function as part of future space infrastructure, this is a place to explore and build.

    Register → https://luma.com/n9cw58h0
    Learn more → https://lnkd.in/eZcxT8RX
    Join the conversation → https://lnkd.in/efFCfhWP



Funding

Liquid AI: 3 total rounds
Last round: Series A, US$ 250.0M
See more info on Crunchbase