The Inspiration
In a world flooded with AI that provides instant answers, we noticed a critical gap. The focus on getting answers quickly is creating a dependency that can stifle deep, original thought. The hardest problems don't need a faster answer; they need a better question. Furthermore, the most important brainstorming and self-reflection require absolute privacy, something cloud-based AIs can never guarantee. We were inspired to build RefactorAi for the OpenAI Open Model Hackathon as an antidote to this: a tool designed not to think for you, but to help you think better.
What it does
RefactorAi is a sophisticated, native desktop application built with PyQt5 that provides a conversational interface to a local AI model. It is not a simple chatbot. The user selects one of five powerful, built-in thinking frameworks:
The 5 Whys: To find the root cause of any problem.
First Principles Thinking: To deconstruct challenges to their fundamental truths.
The Socratic Method: To rigorously examine and challenge one's own beliefs.
Systems Thinking: To map and understand complex interconnections.
Devil's Advocate: To stress-test ideas and uncover hidden weaknesses.
Plus, a Custom Framework editor: This allows power users to write and save their own system prompts, creating unique thinking tools tailored to their specific needs.
The AI then adopts the persona of the selected framework, guiding the user through their problem with insightful questions instead of solutions.
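As a rough sketch, a selected framework can be translated into a system prompt that is prepended to the conversation. The prompt wording and helper names below are illustrative, not RefactorAi's actual prompts:

```python
# Illustrative framework-to-prompt mapping; the real prompts are far more
# detailed, with firm constraints and examples for each persona.
FRAMEWORK_PROMPTS = {
    "The 5 Whys": (
        "You are a root-cause analyst. Never offer solutions. After each "
        "user statement, ask one 'why' question that digs one level deeper."
    ),
    "Devil's Advocate": (
        "You are a constructive skeptic. Challenge every claim with a "
        "pointed question; do not provide answers or reassurance."
    ),
}

def build_messages(framework, history, user_text):
    """Prepend the framework's system prompt to the running conversation."""
    return (
        [{"role": "system", "content": FRAMEWORK_PROMPTS[framework]}]
        + history
        + [{"role": "user", "content": user_text}]
    )
```

Because the persona lives entirely in the system message, switching frameworks mid-project is just a matter of rebuilding the message list with a different prompt.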
Key Features:
5+ Specialized Thinking Frameworks: Go beyond simple chat with dedicated AI personas for The 5 Whys, First Principles Thinking, The Socratic Method, Systems Thinking, and Devil's Advocate.
Custom Framework Editor: Extend the application's capabilities by creating, saving, and using your own unique system prompts as new thinking tools.
Fully Native Desktop Experience: A custom-built PyQt5 application with a modern, professional dark theme, animations, and a responsive layout that works entirely offline.
Persistent Memory & Thinking Journal: A robust SQLite database backend saves all sessions and conversations automatically. A built-in "Thinking Journal" allows you to create, edit, and revisit standalone notes and ideas.
Flexible Model Selection: The app intelligently detects all your local Ollama models and allows you to switch between them on the fly, empowering you to balance speed and performance based on your hardware.
Advanced Input: Voice & File Analysis: Interact with the AI hands-free using built-in voice-to-text. You can also analyze local files, including performing OCR on images to extract text and bring it into your thinking session.
100% Offline and Private: All AI processing, conversation history, and data storage happen exclusively on your local machine. Nothing is ever sent to the cloud.
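The file-analysis input path described above might route files roughly like this. The helper names, extension list, and preprocessing choices are our illustration, not the app's exact pipeline; the image branch assumes OpenCV and pytesseract are installed:

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".bmp", ".tiff"}

def needs_ocr(path):
    """True for image files that must go through OCR rather than a plain read."""
    return Path(path).suffix.lower() in IMAGE_EXTS

def extract_text(path):
    """Return a file's text, running OCR on images and reading text files directly."""
    if not needs_ocr(path):
        return Path(path).read_text(encoding="utf-8", errors="replace")
    import cv2          # lazy imports: only needed for the image branch
    import pytesseract
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu binarization usually helps Tesseract on screenshots and scans
    _, binarized = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binarized)
```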
How we built it
RefactorAi is a multi-threaded Python application with a clear separation between its UI and core logic.
Frontend/UI: We built a custom native user interface from scratch using PyQt5, focusing on a professional, responsive, and modern user experience.
Backend & AI: The core logic is powered by Python. We use the Ollama library to serve and interact with a variety of local, open-source AI models. This gives users the flexibility to choose a model that best matches their machine's power—from OpenAI's powerful gpt-oss for high-end systems, to highly efficient models like Microsoft's Phi-3-mini or Meta's Llama-3-8B for an excellent balance of speed and performance on any device.
Database: A local SQLite database handles all data persistence for sessions, conversations, and the Thinking Journal.
Advanced Features: We integrated libraries like speech_recognition, pyttsx3 for voice capabilities, and pytesseract with OpenCV for the file analysis and OCR features.
Modularity: The core logic is organized into a src/ directory containing modules for managing mental models, sessions, and exports.
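The model-detection and selection flow described above can be sketched like this. It assumes the `ollama` Python client and a running Ollama server; the preference-ordering helper is our illustration:

```python
def list_local_models():
    """Names of every model the local Ollama server reports."""
    import ollama  # lazy import so the pure helper below works standalone
    return [m["model"] for m in ollama.list()["models"]]

def pick_model(available, preferred):
    """First preferred model that is actually installed, else the first available."""
    for wanted in preferred:
        for installed in available:
            if installed.startswith(wanted):
                return installed
    return available[0]

def ask(model, messages):
    """One blocking chat round trip; streaming is omitted for brevity."""
    import ollama
    return ollama.chat(model=model, messages=messages)["message"]["content"]
```

A default preference list like `["gpt-oss", "phi3", "llama3"]` gives high-end machines the most capable model while falling back to lighter ones elsewhere.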
Challenges we ran into
Our biggest challenge was building a complex, multi-threaded native desktop application in such a short timeframe. Ensuring that the AI processing, which can be intensive, didn't freeze the user interface required a deep dive into Qt's threading and signal system.
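The worker pattern we converged on looks roughly like the sketch below. It is simplified: `run_model` stands in for the blocking Ollama call, and error handling is omitted:

```python
from PyQt5.QtCore import QObject, QThread, pyqtSignal

class AIWorker(QObject):
    finished = pyqtSignal(str)   # carries the model's reply back to the UI

    def __init__(self, prompt):
        super().__init__()
        self.prompt = prompt

    def run(self):
        reply = self.run_model(self.prompt)   # blocking call, off the UI thread
        self.finished.emit(reply)

    def run_model(self, prompt):
        # Placeholder for the real blocking ollama.chat(...) round trip.
        return f"echo: {prompt}"

def start_worker(prompt, on_done):
    """Move a worker onto its own QThread and wire up the signals."""
    thread = QThread()
    worker = AIWorker(prompt)
    worker.moveToThread(thread)
    thread.started.connect(worker.run)
    worker.finished.connect(on_done)
    worker.finished.connect(thread.quit)
    thread.finished.connect(worker.deleteLater)
    thread._worker = worker   # keep a reference so the worker isn't garbage-collected
    return thread
```

Because results come back through a signal, the UI slot runs safely on the main thread and the window never freezes while the model is generating.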
Another major challenge was AI model performance. We were excited to build with OpenAI's powerful gpt-oss model, but quickly discovered that its resource requirements led to a slow, lagging user experience on standard hardware. This forced us to confront the classic trade-off between model power and application responsiveness. Instead of just picking a single smaller model, we built a user-facing solution: a model selector. This led us to integrate faster, highly efficient models like Microsoft's Phi-3-mini and Meta's Llama-3-8B, turning a performance challenge into a key feature that allows users to choose the perfect model for their own machine.
Accomplishments that we're proud of
We are incredibly proud of building a fully-featured native desktop application from scratch in such a short time, focusing on a polished and professional user experience.
The "Thinking Journal": We successfully designed and implemented a robust SQLite backend that provides the application with persistent memory, allowing users to save, revisit, and search their entire history of thoughts and breakthroughs.
Engineering AI Personas: We are very proud of the detailed prompt engineering that went into creating five distinct and effective AI "personalities." Making the AI a patient Socratic partner one moment and a sharp Devil's Advocate the next was a core achievement.
Solving for Performance: Instead of being blocked by the high resource needs of large models, we turned a performance challenge into a user-centric feature. Building the flexible model selector to support everything from the powerful gpt-oss to the efficient phi3:mini is an accomplishment we're particularly proud of.
Advanced Feature Integration: Successfully integrating complex features like real-time voice-to-text and file OCR into a stable, multi-threaded application was a significant technical achievement.
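The persistent-memory backend behind the Thinking Journal described above could be sketched with a schema like the one below. Table and column names are illustrative, not the actual schema:

```python
import sqlite3

# Hypothetical schema: one row per session, one per chat message,
# and standalone journal entries.
SCHEMA = """
CREATE TABLE IF NOT EXISTS sessions (
    id        INTEGER PRIMARY KEY,
    framework TEXT NOT NULL,
    started   TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS messages (
    id         INTEGER PRIMARY KEY,
    session_id INTEGER NOT NULL REFERENCES sessions(id),
    role       TEXT NOT NULL,          -- 'user' or 'assistant'
    content    TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS journal (
    id     INTEGER PRIMARY KEY,
    title  TEXT NOT NULL,
    body   TEXT NOT NULL,
    edited TEXT DEFAULT CURRENT_TIMESTAMP
);
"""

def open_db(path="refactorai.db"):
    """Open (or create) the local database and ensure the tables exist."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```

Keeping everything in a single local SQLite file is also what makes the privacy guarantee simple to uphold: there is no server-side state at all.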
What we learned
This hackathon was a deep dive into the real-world challenges of building production-ready, local AI applications.
Prompt Engineering is an Art: We learned that crafting a successful AI persona is about more than just giving instructions; it's about setting firm constraints and providing clear examples. The difference between a generic chatbot and a specialized thinking partner lies entirely in the quality of the system prompt.
The User Experience is Paramount: The biggest lesson was the trade-off between AI model power and user experience. A powerful model is useless if it's too slow to interact with. This taught us the importance of choosing the right model for the right hardware and, ultimately, led us to build the model selector to empower the user.
Native App Development is Rewarding: Building a multi-threaded native desktop application with PyQt5 was a significant challenge. We learned a great deal about managing background processes to keep the UI responsive, handling signals and slots for communication, and designing a complex, custom interface from the ground up.
What's next for RefactorAi
We believe RefactorAi is more than just a tool; it's the foundation for a new category of personal intelligence software. While this hackathon version is a powerful proof-of-concept, we have a clear vision for its future:
Team & Enterprise Collaboration: Evolving the platform to allow teams to join thinking sessions with an AI acting as an impartial facilitator, helping to deconstruct business challenges, run project post-mortems, and brainstorm new strategies in a structured, unbiased way.
Deeper Integrations: Building a robust API and plugins to connect RefactorAi directly with knowledge bases like Notion and Obsidian. This would allow users to select their existing notes and use the Catalyst to find hidden connections and refactor their existing knowledge.
The Proactive "Catalyst Agent": Expanding on our file analysis feature, we envision an agentic version that can be pointed at a codebase, a business plan, or a scientific paper. The agent would analyze the content and then proactively initiate a thinking session with the user, asking targeted questions about potential weaknesses, unexplored opportunities, or logical inconsistencies it discovered.
A Framework Marketplace: Building on the custom framework feature, we plan to create a platform where users can create, share, and download new thinking models, fostering a community dedicated to the art of better thinking.