Pathway
Software Development

Palo Alto, California · 22,135 followers

Pathway builds the first post-transformer frontier model that solves AI’s fundamental memory problem.

About us

At Pathway, we are shaking the foundations of artificial intelligence by introducing the world's first post-transformer model, one that adapts and thinks the way humans do. Our breakthrough architecture outperforms the Transformer and gives enterprises full visibility into how the model works. By combining this foundation model with the fastest data processing engine on the market, Pathway enables enterprises to move beyond incremental optimization toward truly contextualized, experience-driven intelligence. We are trusted by organizations such as NATO, La Poste, and Formula 1 racing teams.

Website
https://www.pathway.com
Industry
Software Development
Company size
11-50 employees
Headquarters
Palo Alto, California
Type
Privately Held
Specialties
artificial intelligence, data processing, LLMs, and models

Updates

  • Pathway reposted this

    San Francisco will be electric on May 5th, with the inventors of the Transformer and post-transformer architectures in a boxing ring! 🥊🐉 This is a battle between innovations that shape trillion-dollar markets, presented by their very authors. On Tuesday, May 5th in SF, Pathway will host this boxing-style debate between the biggest inventors behind the foundations of frontier models. Fighters confirmed in the corners:
    🥊 Lukasz Kaiser, inventor of the Transformer architecture; co-creator of ChatGPT, o1/o3, and TensorFlow; leader at OpenAI.
    🥊 Mathias Lechner, CTO of Liquid AI, pioneering liquid neural networks.
    🥊 Adrian Kosowski, CSO of Pathway and co-creator of the Dragon Hatchling (BDH) post-transformer architecture.
    I'll be moderating with Dexter Horthy. More announcements soon… Expect punchy rounds, crowd-fueled energy, excellent food and drinks, and a line-up of attendees so cool that your heart will burst, all while going deep on what transformers got spectacularly right and where they're starting to break at scale. Registration is by approval (a curated room of the right people). Link here 👇 https://lnkd.in/gkN_9ag2
    The post-transformer era isn't coming. It's here. Let's debate it, stress-test it, and build it. 🐉

  • Pathway reposted this

    Zuzanna Stamirowska · Pathway

    The Dragon has landed 🐉. On Friday, we met with researchers at the Vector Institute as part of Jan's 40th birthday seminar series 🎂. This series was a lot of fun, intense, and crucial. Within 10 days, we took the post-transformer discourse through the halls of New York University, Harvard University, Massachusetts Institute of Technology, Mila - Quebec Artificial Intelligence Institute, Stanford University, and the Vector Institute. Across these 6 institutions, there was one focus: understanding the missing link between the transformer and the brain.
    We are convinced that the path to superintelligence leads through an architecture in which memory lives within the model network itself, as presented in the Dragon Hatchling (BDH) paper. Bigger context windows, longer chains of thought, or several poorly understood agents working with each other, burning the planet while at it 🌎🔥, won't get us there.
    In this discussion at Vector, Jan Chorowski walked us through the core of BDH: neurons and synapses within frontier models; brain-like sparsity and Hebbian learning; local computation that scales without coordination bottlenecks; and why we believe BDH is a state-space model done right. Jan closed with a peek into Baby Dragon™, our BDH-based reasoning model, and the three pillars we're building toward: continual learning with experience, generalization over time, and enterprise-native intelligence.
    Huge thanks to Chris Maddison, Fatima Khamitova, Johannah Thumb, Lucas Dinh, and Claire Nouet for making this possible, and to everyone who joined us across the roadshow. Once again: Happy Birthday, Jan!!! While this series wraps, the conversation continues. In 2 weeks, we're bringing the inventors of the Transformer and post-transformer architectures under one roof in SF 🥊🥊 Next stop: San Francisco – Townsend Street.

  • Pathway reposted this

    Harvard/MIT → Mila → Stanford, and this Friday, the Vector Institute. The Dragon's final stop (for now…) 🐉
    Jan Chorowski will cover the key aspects driving the transition toward the post-transformer era, zooming in on continual learning in reasoning models – a key aspect of BDH, and one of the ultimate problems in frontier models. This will be an in-person session at the Vector Institute.
    There's something fitting about closing this seminar series here. It's the city that helped shape modern deep learning, and an institution co-founded by Jan's senior co-author from his Google Brain days, Nobel laureate Geoff Hinton. If you're at Vector – faculty, postdoc, student, or affiliate – we'd love to meet you!
    Thanks Fatima Khamitova, Chris Maddison, Johannah Thumb, and Lucas Dinh for enabling this!
    *Part of Jan's 40th birthday seminar series on BDH 🎂

  • Pathway reposted this

    When BDH came to Stanford University, there was fire 🔥 Yesterday, students, postdocs, faculty, and affiliates came together around one central question: if stronger reasoning depends on memory, how should memory be built into the architecture itself? That question sat at the center of Jan Chorowski's seminar.
    One of the main topics of the talk was why synaptic memory matters. Classical RNNs kept too little working state. Transformers went to the other extreme, letting short-term memory grow without bound. That is why so many memory workarounds keep appearing. BDH is fundamentally different: in the Dragon Hatchling (BDH), memory grows within the synapses between neurons, allowing the model to accumulate patterns through use instead of treating every interaction as a fresh start. That is part of what makes continual learning possible – test-time training. The intuition is simple: in natural systems, it is the network that remembers (see the toy sketch below).
    🔹 The network stores the memory.
    🔹 The network carries the function.
    🔹 Neurons do the computations, but the connection pattern is what gives the system continuity.
    This is the direction behind Pathway's BDH, a brain-like network inside a frontier reasoning model. This is how we, as a neolab, are building the next generation of models around memory, continual learning, and long-horizon reasoning.
    Thank you to Stanford ACM for helping make the seminar happen! Special shout-out to Suze V.
    Next stop: Vector Institute
    *Part of Jan's 40th birthday 🎂 seminar series :-)
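    A minimal toy sketch of the "memory lives in the synapses" idea, in Python. This illustrates only the general principle described above (a Hebbian outer-product update on a fixed-size synaptic state); it is not Pathway's BDH implementation, and every name and constant here is invented for the example.

    ```python
    import numpy as np

    # Toy sketch only: a Hebbian synaptic-memory update, NOT the actual BDH code.
    # The point: state lives in a fixed-size synapse matrix, not in a context
    # window that grows with every token.

    rng = np.random.default_rng(0)
    N = 64                                  # number of neurons (arbitrary)
    synapses = np.zeros((N, N))             # synaptic state = the memory

    def step(x, lr=0.1, decay=0.01):
        """Process one input vector: read through the synapses, then
        strengthen connections between co-active neurons (Hebbian rule)."""
        global synapses
        y = np.maximum(synapses @ x, 0.0)   # local, feed-forward readout
        synapses += lr * np.outer(y, x)     # "fire together, wire together"
        synapses *= 1.0 - decay             # slow forgetting keeps state bounded
        return y

    # Memory cost stays O(N^2) no matter how long the input stream gets,
    # unlike transformer attention, whose working state grows with length.
    for _ in range(1000):
        step(rng.random(N))
    ```

    The decay term is just one simple way to keep the synaptic state bounded; the actual update, sparsity, and locality rules are the subject of the BDH paper.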

  • Pathway reposted this

    BDH's 97.4% Sudoku result, and the significance of that result, explained by the one and only Jon Krohn on SuperDataScience! Just a 9-minute listen. For comparison, leading LLMs score around 0%. BDH achieves this:
    🔹 while maintaining language fluency
    🔹 without chain-of-thought
    🔹 without external tools
    🔹 at a much lower cost than today's reasoning-heavy approaches
    Jon, thank you so much for your crisp explanation! Link to the episode: https://lnkd.in/g5zY9NDM

  • Pathway reposted this

    The BDH reasoning model reaches 97.4% accuracy at solving Extreme Sudoku puzzles, while leading LLMs are close to 0% accuracy! Remember: BDH is also a language model. BDH achieves this without chain-of-thought, without external tools, and at materially lower cost than current reasoning-heavy approaches…
    As with AlphaGo, the point is not the game itself. The point is what the game reveals about the capabilities of our architecture. Sudoku is hard to fake. It is a tightly constrained problem: you have to hold multiple possibilities in mind, navigate interacting rules, backtrack when needed, and converge on a valid solution (the solver sketch below makes those demands concrete). Linguistic fluency is not enough. You either reason through the constraints, or you fail.
    This is also where today's transformer-based systems begin to show their limits. Transformers reason through language, but many important problems do not live naturally in language. That is why we built BDH differently. BDH keeps what language models do well, but adds a larger internal reasoning space and intrinsic memory mechanisms that support continual learning and adaptation during use. That is the real shift: reasoning becomes native, not bolted on.
    Welcome to the Post-Transformer Era.
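    To make the "hold possibilities, apply interacting rules, backtrack" demand concrete, here is a minimal classical backtracking Sudoku solver in Python. It illustrates the structure of the task only, not how BDH solves it; the function names are invented for the example.

    ```python
    def valid(grid, r, c, v):
        """Check the three interacting constraints: row, column, and 3x3 box."""
        if v in grid[r]:
            return False
        if any(grid[i][c] == v for i in range(9)):
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

    def solve(grid):
        """Fill a 9x9 grid (lists of ints, 0 = empty) in place by
        depth-first backtracking search."""
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    for v in range(1, 10):
                        if valid(grid, r, c, v):
                            grid[r][c] = v      # tentatively commit a value
                            if solve(grid):
                                return True
                            grid[r][c] = 0      # dead end: undo and try the next
                    return False                # no value fits: force backtracking
        return True                             # no empty cells left: solved
    ```

    Every recursive call must keep the partial solution alive while exploring alternatives, which is exactly the kind of working memory the post argues linguistic fluency alone does not provide.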

  • Pathway reposted this

    Pathway has been named one of Fast Company's Most Innovative Companies today. We've believed from the beginning that memory and continuous learning are the missing pieces in today's AI systems. Transformer-based models can generate, but they don't learn from experience – they reset every time, like the protagonist of the movie Memento. At Pathway, we've taken a different path. With the Dragon Hatchling (BDH), we're building AI that can learn continuously, retain knowledge over time, and reason across longer horizons. Seeing this direction recognized at this level is incredibly meaningful. We're grateful for the recognition, and very excited for what's ahead. #FCMostInnovative

  • Pathway reposted this

    Zuzanna Stamirowska · Pathway

    Attempting to do reasoning and continual learning with transformers is like trying to eat spaghetti with a spoon. What a conversation! I was privileged to bring together Martin Farach-Colton (New York University CS Chair, ACM/IEEE/SIAM Fellow), Julian Togelius (NYU professor, triple IEEE Fellow, NeurIPS Senior Program Committee), and our CSO Adrian Kosowski (PhD at 20, theoretical computer scientist and quantum physicist, former professor and researcher at École Polytechnique and Inria) to pressure-test where transformers break.
    Martin: "LLMs are remarkable until you push them just beyond their planning horizon, and then they tend to collapse catastrophically."
    Julian: "If you use GPT or Gemini… it literally keeps a text file. It never changes the weight of the transformer. It never continuously updates itself. These are fundamental limitations."
    Julian compared the current illusion of memory in LLMs to Memento: amnesia, and notes on your own body.
    Martin: "The whole 'language is intelligence' thing… is it general intelligence? It definitely is not general intelligence."
    The consensus was sharp: with transformers, memory and reasoning are faked – they are illusions. These aren't bugs to patch; this is an architecture with fundamental blockers. Adrian, the main author of the BDH architecture, talked about how continual learning was achieved with the new post-transformer architecture: catastrophic forgetting deferred hundreds of times, no patches, no text files on the model's body.
    We covered far more than I can fit here, from how the fundamentals of computer science can impact the AI field to Space Invaders. Worth watching in full. Huge thanks to Julian, Martin, and Adrian for their time!
    Full discussion 👇 https://lnkd.in/gknzjsJM

  • Pathway reposted this

    Zuzanna Stamirowska · Pathway

    It was so great to chat about our post-transformer AI architecture #BDH with Corey Noles and @Grant Harvey from The Neuron - AI News! Thank you for having me! I believe this podcast is a nice intro to the post-transformer era and BDH for everybody, whatever their technical background. I hope you enjoy it. The full episode is available here: https://lnkd.in/gY2ExkbC
    *In the photo: me passionately explaining particle interaction systems 😅



Funding

Pathway · 3 total rounds

Last round: Seed · US$10.0M

See more info on Crunchbase