Reverse Vibez
We built a tool that lets you queue up a bunch of side-project ideas for AI to work on simultaneously. You have a quick interview conversation with the AI so it can figure out what you actually want, and then it spins up workers in isolated sandboxes. The whole point is that you can kick this off before bed and wake up to actual working prototypes.
How?
The core idea of Reverse Vibez is reverse-engineering codebases you like from GitHub.
What it does
You know when you see a GitHub repo and think "damn, I wish I understood how they did that drag-and-drop thing" or "their auth flow is so clean"? That's what this is for. You paste the URL, and instead of the AI just doing stuff, it actually talks to you first: "What part of this do you care about? What are you trying to build?" Then it breaks that down into a queue of extraction tasks, each one pulling out a specific pattern into a minimal Next.js example you can actually run.
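Under the hood, each interview answer turns into a self-contained unit of work. Here's a minimal sketch of what one extraction task might look like; the field names are illustrative, not the exact production shape:

```typescript
// Hypothetical shape of one extraction task produced by the interview step.
type ExtractionTask = {
  id: string;
  repoUrl: string;   // the GitHub repo the user pasted
  pattern: string;   // e.g. "the drag-and-drop reordering logic"
  context: string;   // what the user said they're trying to build
  status: "queued" | "running" | "done" | "failed";
  outputDir?: string; // where the minimal Next.js example lands when done
};
```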
How we built it
Next.js 14 with the App Router, Drizzle ORM with Turso for the database. Three separate Claude agents: one analyzes the source repo, one runs the interview, one does the extraction. We built a job queue so you can fire off multiple extractions without having to babysit them.
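For the curious, here's roughly what the queue's persistence layer looks like with Drizzle and Turso. A simplified sketch, not the exact production schema; table and column names are illustrative:

```typescript
import { createClient } from "@libsql/client";
import { drizzle } from "drizzle-orm/libsql";
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";

// Illustrative jobs table: one row per queued extraction.
export const extractionJobs = sqliteTable("extraction_jobs", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  repoUrl: text("repo_url").notNull(),
  pattern: text("pattern").notNull(),                   // what the interview decided to extract
  status: text("status").notNull().default("queued"),   // queued | running | done | failed
  result: text("result"),                               // path or payload of the generated example
  createdAt: integer("created_at", { mode: "timestamp" }).$defaultFn(() => new Date()),
});

// Turso connection via libSQL; env var names follow the usual convention.
const client = createClient({
  url: process.env.TURSO_DATABASE_URL!,
  authToken: process.env.TURSO_AUTH_TOKEN,
});

export const db = drizzle(client);
```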
Challenges we ran into
So our first assumption was completely backwards. We thought the hard part would be getting AI to extract code cleanly. Nope. The hard part was figuring out what someone even wants in the first place. We burned the first few hours on a "paste URL, get code" flow and the outputs were garbage. Like, technically it worked, but nobody would actually use what came out. Adding the interview step felt like admitting defeat at first - like we were making users do more work. Turns out that's the thing that made any of this usable.

The other thing that kept biting us: AI loves to drag in half the original codebase. We had to get weirdly specific in the prompts. "Mock all external dependencies. Keep ONLY the core pattern logic. Do not include the database. Do not include auth unless that IS the pattern." Clear instructions beat vague hope, every single time.

Why three agents instead of one

We almost did one mega-agent that does everything. Glad we didn't. Breaking it up meant each prompt could be way more focused. The analyzer doesn't need to know how to extract code - it just needs to understand the repo structure and pass that context along. Separation of concerns, but for AI systems.
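To make the split concrete, here's a stripped-down sketch of the pipeline using the Anthropic SDK. The prompts are abbreviated, the model name is a placeholder, and the interview agent is omitted for brevity; the point is that each agent gets one narrow system prompt instead of one mega-prompt:

```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment
const MODEL = "claude-sonnet-4-5"; // placeholder; pin whatever model you actually use

async function ask(system: string, user: string): Promise<string> {
  const msg = await anthropic.messages.create({
    model: MODEL,
    max_tokens: 4096,
    system,
    messages: [{ role: "user", content: user }],
  });
  const block = msg.content[0];
  return block.type === "text" ? block.text : "";
}

// Agent 1: only understands repo structure. It never sees extraction rules.
async function analyzeRepo(repoSummary: string): Promise<string> {
  return ask(
    "You analyze a repository's structure and summarize where each feature lives. Do not write code.",
    repoSummary
  );
}

// Agent 3: only extracts, with the painfully explicit constraints that worked for us.
async function extractPattern(analysis: string, pattern: string): Promise<string> {
  return ask(
    [
      "Extract the requested pattern into a minimal, runnable Next.js example.",
      "Mock all external dependencies.",
      "Keep ONLY the core pattern logic.",
      "Do not include the database.",
      "Do not include auth unless that IS the pattern.",
    ].join(" "),
    `Pattern to extract: ${pattern}\n\nRepo analysis:\n${analysis}`
  );
}
```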
Accomplishments that we're proud of
The queue works. Like, actually works. You can fire off 5 extraction jobs and go get coffee. That felt ambitious when we started, but it's what makes this actually useful instead of a demo. And honestly, the interview phase surprised us. It sounds like such a simple addition but it completely changed the quality of outputs.
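As a usage sketch (the route name and payload here are illustrative, not the real API), kicking off several extractions is just a handful of POSTs; the queue picks them up from there:

```typescript
// Fire off five extraction jobs against one repo and walk away.
const repoUrl = "https://github.com/someone/cool-repo";
const patterns = [
  "drag-and-drop reordering",
  "optimistic updates",
  "the command palette",
  "keyboard shortcut handling",
  "infinite scroll",
];

await Promise.all(
  patterns.map((pattern) =>
    fetch("/api/jobs", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ repoUrl, pattern }),
    })
  )
);
// Go get coffee. Poll the jobs endpoint (or the dashboard) later for results.
```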
What we learned
Honestly, we assumed the hardest part would be the AI doing the extraction. It wasn't. The real challenge was everything that comes before that: understanding what someone actually wants to pull out in the first place. We spent the first few hours trying to build a "paste a URL, get clean code" flow, and the results were rough. The outputs just weren't usable. Adding an interview step felt like we were conceding something, but it turned out to be the exact thing that made the tool work.

One big takeaway: AI is remarkably good at isolating code patterns, but only if you're painfully explicit. Early on, it kept dragging in half the original codebase. Things only clicked once we spelled it out clearly: mock all external dependencies, keep only the core pattern logic. Clear instructions beat vague hope every time.

Agent design became its own deep rabbit hole. We ended up with three agents: one to analyze the repository structure, one to talk to you and clarify what you actually want, and one to do the extraction itself. We briefly considered rolling everything into a single "do-it-all" agent, but breaking it apart made each prompt far more focused. The analyzer doesn't need to know how to extract code; it just needs to understand the codebase and produce clean context. It's separation of concerns, but applied to AI systems.
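In practice, "clean context" means the analyzer's output is a contract the other two agents can rely on. A rough sketch of that shape, with illustrative field names:

```typescript
// What the analyzer hands off. The interviewer and extractor consume this,
// so neither ever has to read the raw repo themselves.
interface RepoAnalysis {
  repoUrl: string;
  framework: string;        // e.g. "Next.js 14", "Remix"
  features: Array<{
    name: string;           // "drag-and-drop board"
    entryPoints: string[];  // files where the pattern lives
    dependencies: string[]; // libs it leans on (mocked during extraction)
  }>;
  notes: string;            // anything structural the extractor should know
}
```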
What's next for Reverse Vibez
We want to scope down to UI-specific extractions first - like if you just want that one animation or that specific component interaction, you shouldn't have to think about the rest of the codebase at all.

Video uploads would be huge. Imagine recording your screen like "see this? I want this exact thing" and the AI figures out what you're pointing at. Way more natural than trying to describe it in text. Same with voice: we want to just talk through what we're building instead of typing everything out. The interview phase would feel so much better as an actual conversation.

And then there's this whole other direction we keep coming back to. Right now it's code repos, but the same idea works for other stuff. Like, some company runs a beautiful ad campaign and you're sitting there wondering "what made this work?" - being able to paste that in and extract the actual strategy, the structure, what they did differently. We don't know exactly what that looks like yet, but it feels like there's something there. If this works for code patterns, why not creative patterns? Marketing patterns? We'll see where it goes.