Quinn and Thorsten announce that the Amp editor extension will soon self-destruct. The time of the sidebar is over.
In this episode, Quinn and Thorsten discuss how everything seems to have changed again with Gemini 3 and Opus 4.5 and what comes after — the assistant is dead, long live the factory.
In this episode, Beyang sits down with Camden to discuss how the Amp team evaluates new models: why tool calling is the key differentiator, how open models like K2 and Qwen stack up, what GPT-5 changes, and how qualitative "vibe checks" often matter more than benchmarks. They also dive into subagents, model alloys, and what the future of agentic coding looks like inside Amp.
In this episode, Beyang and Thorsten discuss strategies for effective agentic coding, including the 101 of how it's different from coding with chat LLMs, the key constraint of the context window, how and where subagents can help, and the new oracle subagent which combines multiple LLMs.
In this episode, Quinn and Thorsten discuss Claude 4, sub-agents, background agents, and they share "hot tips" for agentic coding.
In this episode, Beyang interviews Thorsten and Quinn to unpack what has happened in the world of Amp in the last five weeks: how predictions played out, how working with agents shaped how they write code, how agents are and will influence model development, and, of course, all the things that have been shipped in Amp.
Thorsten and Quinn talk about the future of programming and whether code will still be as valuable in the future, how maybe the GitHub contribution graph is already worthless, how LLMs can free us from the tyranny of input boxes, and how conversations with an agent might be a better record of how a code change came to be than git commit messages. They also share where it works and where it simply doesn't.
Quinn and Thorsten start by sharing how reviews are still very much needed when using AI to code and how it changes the overall flow you're in when coding with an agent. They also talk about a very important question they face: how important is code search, in its current form, in the age of AI agents?
Thorsten and Quinn talk about how different agentic programming is from normal programming and how the mindset has to adapt to it. One thing they discuss is that having a higher-level architectural understanding is still very important, so that the agent can fill in the blanks. They also talk about how, surprisingly, the models are really, really good when they have the inputs a human would normally get. Most importantly, they share the realization that subscription-based pricing might make for bad agentic products.
In the first episode of Raising an Agent, Quinn and Thorsten kick things off by sharing a lot of wow moments they experienced after getting the agent at the heart of Amp into a working state. They talk about how little is actually needed to create an agent and how magical the results are when you give a model the right tools, unlimited tokens, and feedback. That might be the biggest surprise: how many previous assumptions feel outdated when you see an agent explore a codebase on its own.
Select an episode to listen to.