Inspiration

We've all been there: spending 30 minutes in a meeting just to explain a decision we made three weeks ago. Most meetings exist not to make new decisions but to transfer context that should already be accessible. Person A does the work, their reasoning lives only in their head, Person B needs that context, a meeting gets scheduled, and valuable time is spent on something that could have happened asynchronously.

We asked: what if your teammates could query your reasoning anytime, without interrupting you?

What it does

Cognitive State Protocol is an AI-powered knowledge management system that automatically captures, structures, and makes searchable the reasoning and decisions team members produce as they work.

Core features:

  • Audio/Video/Document Capture - Record coding sessions, meetings, or design discussions with automatic git context linking
  • Automatic Decision Extraction - AI extracts decisions, alternatives considered, confidence levels, and open questions from captured media
  • Personal Cognitive Bots - Each user has an AI "version" of themselves that answers questions about their decisions with source citations
  • Cross-Team Querying - Ask a teammate's bot "Why did Sarah choose Postgres?" and get her reasoning without scheduling a meeting (see the sketch after this list)
  • Proactive Conflict Detection - System identifies contradictions between team members before they become problems
  • Bot-to-Bot Meetings - AI agents representing team members can conduct async discussions and synthesize perspectives
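
To make the cross-team querying concrete, here is a minimal sketch of what a personal-bot query endpoint could look like, assuming FastAPI and the google-generativeai SDK (with genai.configure(api_key=...) called at startup). The route shape, model id, and the retrieve_decisions helper are illustrative stand-ins, not our actual API.

```python
# Hypothetical "ask a teammate's bot" endpoint: fetch that person's
# relevant decisions, then have Gemini answer using only those, citing ids.
from dataclasses import dataclass

import google.generativeai as genai
from fastapi import FastAPI

app = FastAPI()

@dataclass
class Decision:
    id: int
    summary: str

def retrieve_decisions(user_id: str, question: str) -> list[Decision]:
    # Stand-in for the pgvector semantic search described under
    # "How we built it".
    return []

@app.get("/bots/{user_id}/ask")
async def ask_bot(user_id: str, q: str):
    decisions = retrieve_decisions(user_id, q)
    context = "\n".join(f"[{d.id}] {d.summary}" for d in decisions)
    model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model id
    prompt = (
        f"Answer as {user_id}, using only the decisions below and citing "
        f"their ids in brackets.\n\n{context}\n\nQuestion: {q}"
    )
    answer = model.generate_content(prompt)
    return {"answer": answer.text, "sources": [d.id for d in decisions]}
```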

How we built it

  • Backend: Python/FastAPI with async support, PostgreSQL + pgvector for semantic search (sketched after this list), SQLAlchemy 2.0, Redis for caching, and Google Gemini 3 (Pro for heavy processing, Flash for quick queries)

  • Frontend: Next.js 15 with React 19, Radix UI for accessibility, Tailwind CSS, Zustand for state management, Framer Motion for animations, and XYFlow for knowledge graph visualization

  • AI Pipeline: Gemini text-embedding-004 for semantic embeddings, structured extraction prompts for decision parsing, and confidence scoring from speech patterns
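
As a concrete illustration of the embed-and-search path: embed a question with text-embedding-004 (768-dimensional vectors), then rank stored decisions by cosine distance in pgvector. The schema below is a simplified stand-in for our real models, not the actual table definitions.

```python
# Sketch of semantic retrieval over decisions with Gemini embeddings +
# pgvector, using SQLAlchemy 2.0-style mapped columns.
import google.generativeai as genai
from pgvector.sqlalchemy import Vector
from sqlalchemy import Text, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Decision(Base):
    __tablename__ = "decisions"
    id: Mapped[int] = mapped_column(primary_key=True)
    summary: Mapped[str] = mapped_column(Text)
    embedding: Mapped[list[float]] = mapped_column(Vector(768))

def search_decisions(session: Session, question: str, k: int = 5) -> list[Decision]:
    # Embed the natural-language question with Gemini.
    resp = genai.embed_content(model="models/text-embedding-004", content=question)
    # Nearest neighbors by cosine distance, closest first.
    stmt = (
        select(Decision)
        .order_by(Decision.embedding.cosine_distance(resp["embedding"]))
        .limit(k)
    )
    return list(session.scalars(stmt))
```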

Challenges we ran into

  • Semantic search at scale - Implementing efficient vector similarity search required careful pgvector configuration and embedding optimization (see the index sketch after this list)
  • Confidence extraction - Teaching the model to reliably score, from natural speech patterns, how confident a speaker sounds in their reasoning
  • Privacy controls - Building fine-grained visibility settings (private/team/public/specific users) without compromising query performance
  • Real-time processing feedback - Showing users extraction progress without blocking the UI during long audio/video processing
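
On the first point, here is the shape of the pgvector tuning involved: an IVFFlat index over the embedding column, with `lists` (set at index build) and `probes` (set at query time) as the main recall/latency knobs. The connection string, table name, and values are placeholders, not our production settings.

```python
# Illustrative pgvector index configuration for cosine-distance search.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg://localhost/example_db")  # placeholder DSN

with engine.begin() as conn:
    # Build the approximate-nearest-neighbor index once embeddings exist;
    # sqrt(row_count) is a common starting point for `lists`.
    conn.execute(text(
        "CREATE INDEX IF NOT EXISTS decisions_embedding_idx "
        "ON decisions USING ivfflat (embedding vector_cosine_ops) "
        "WITH (lists = 100)"
    ))

with engine.connect() as conn:
    # Per-session trade-off: more probes means better recall, slower queries.
    conn.execute(text("SET ivfflat.probes = 10"))
```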

Accomplishments that we're proud of

  • Full-stack decision extraction - From raw audio to queryable decisions with source timestamps in a single pipeline
  • Bot-to-bot meetings - AI agents can conduct discussions between team perspectives without anyone being present
  • Proactive conflict detection - System identifies misaligned decisions before they cause integration issues
  • Source attribution - Every answer includes exact timestamps to the original media, building trust in AI-generated responses
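
The attribution above comes down to threading a source span through the whole pipeline. A simplified model of what each extracted decision carries (all field names and the example values are illustrative, not our actual schema):

```python
# Every extracted decision keeps a pointer back into the original media,
# so answers can cite exact timestamps.
from pydantic import BaseModel

class SourceSpan(BaseModel):
    media_id: str         # which recorded session the decision came from
    start_seconds: float  # where in the recording the reasoning begins
    end_seconds: float

class ExtractedDecision(BaseModel):
    summary: str             # what was decided
    alternatives: list[str]  # options considered and rejected
    confidence: float        # 0..1 score from speech-pattern analysis
    open_questions: list[str]
    source: SourceSpan       # lets every answer cite exact timestamps

decision = ExtractedDecision(
    summary="Chose Postgres over a document store for decision data",
    alternatives=["MongoDB"],
    confidence=0.8,
    open_questions=["How do embeddings shard at scale?"],
    source=SourceSpan(media_id="session-42", start_seconds=183.5, end_seconds=241.0),
)
```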

What we learned

  • Context is knowledge that exists only in people's heads until you build systems to capture it
  • Semantic search dramatically outperforms keyword matching for finding relevant reasoning
  • The best meeting is the one that doesn't need to happen

What's next for Cognitive State

  • IDE integration - Capture decisions directly from inline code comments and commit messages
  • Calendar sync - Automatically identify and reduce redundant context-transfer meetings
  • Decision dependency graphs - Visualize how decisions cascade across team members

Built With

  • Python, FastAPI, SQLAlchemy 2.0, PostgreSQL + pgvector, Redis
  • Next.js 15, React 19, Radix UI, Tailwind CSS, Zustand, Framer Motion, XYFlow
  • Google Gemini (Pro, Flash, and text-embedding-004)