🧑⚖️ Hackathon Judging Platform – Judgeflow
An AI-powered platform that automates eligibility checks, cheat detection, exposed-API-key scanning, expertise-based judge assignment, and scoring, centralizing and streamlining the entire judging process exactly as MLH recommends. Powered by Rust, React, Supabase, and Perplexity.
🧠 Inspiration
We built Judgeflow to solve one of the biggest pain points in hackathons: judging. Manual eligibility checks, integrity issues, and judge coordination make it a stressful, time-consuming process. Inspired by our own experience organizing and competing, we envisioned an AI-powered system that could bring trust, speed, and structure to hackathon judging.
🧑⚖️ What it does
Judgeflow is an end-to-end AI-powered judging platform that:
- ✅ Automates eligibility and integrity checks using Devpost, GitHub, and LinkedIn analysis
- 🔍 Detects exposed API keys and flags potential plagiarism (see the scanning sketch below)
- 🧠 Smartly assigns judges based on expertise and availability
- 📝 Auto-generates judge briefs from project READMEs and demo video transcripts
- 📊 Centralizes scoring into a real-time dashboard
It replaces spreadsheets, last-minute interviews, and manual note-taking with intelligent workflows and data enrichment.
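To illustrate the exposed-API-key check, here is a minimal sketch of how a repository scan might look in Rust. The `regex` and `walkdir` crates and the patterns themselves are assumptions for illustration, not Judgeflow's actual detection rules.

```rust
// Minimal sketch of exposed-API-key scanning over a repository checkout.
// Assumes the `regex` and `walkdir` crates; the patterns are illustrative,
// not the rules Judgeflow actually ships with.
use regex::Regex;
use std::fs;
use walkdir::WalkDir;

fn scan_repo_for_keys(root: &str) -> Vec<(String, usize)> {
    // A couple of common credential shapes: AWS access key IDs and generic
    // `api_key = "..."` style assignments with long token values.
    let patterns = [
        Regex::new(r"AKIA[0-9A-Z]{16}").unwrap(),
        Regex::new(r#"(?i)api[_-]?key\s*[:=]\s*['"][A-Za-z0-9_\-]{20,}['"]"#).unwrap(),
    ];

    let mut findings = Vec::new();
    for entry in WalkDir::new(root).into_iter().filter_map(Result::ok) {
        if !entry.file_type().is_file() {
            continue;
        }
        // Skip binary or unreadable files instead of failing the whole scan.
        let Ok(contents) = fs::read_to_string(entry.path()) else {
            continue;
        };
        for (line_no, line) in contents.lines().enumerate() {
            if patterns.iter().any(|re| re.is_match(line)) {
                findings.push((entry.path().display().to_string(), line_no + 1));
            }
        }
    }
    findings
}
```

Flagged files and line numbers can then be surfaced to organizers for manual review rather than automatically disqualifying a team.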
🏗️ How we built it
Backend:
- Language: Rust
- Libraries: serde, csv, polars, sqlx, reqwest
- Stack: Supabase (Postgres), Docker
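A minimal sketch of how these backend pieces could fit together, wiring `reqwest`, `serde`, and `sqlx` against a Supabase Postgres database. The export endpoint, `Submission` fields, and `submissions` table are placeholders rather than Judgeflow's real schema, and a `tokio` runtime plus reqwest's `json` feature are assumed.

```rust
// Illustrative ingestion path: fetch submission JSON with `reqwest`,
// deserialize it with `serde`, and persist it with `sqlx` into Supabase Postgres.
// Endpoint, struct fields, and table name are placeholders.
use serde::Deserialize;
use sqlx::postgres::PgPoolOptions;

#[derive(Debug, Deserialize)]
struct Submission {
    title: String,
    repo_url: String,
    demo_url: Option<String>,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to the Supabase-hosted Postgres instance.
    let pool = PgPoolOptions::new()
        .max_connections(5)
        .connect(&std::env::var("DATABASE_URL")?)
        .await?;

    // Pull a batch of submissions from a hypothetical export endpoint.
    let submissions: Vec<Submission> =
        reqwest::get("https://example.com/api/submissions").await?.json().await?;

    for s in &submissions {
        sqlx::query("INSERT INTO submissions (title, repo_url, demo_url) VALUES ($1, $2, $3)")
            .bind(&s.title)
            .bind(&s.repo_url)
            .bind(&s.demo_url)
            .execute(&pool)
            .await?;
    }
    Ok(())
}
```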
Frontend:
- Framework: React + TypeScript
- Tooling: Vite, SWC, React Router DOM
- Auth: Clerk
- Notifications: Sonner
- Deployment: Vercel
Data Enrichment:
- Perplexity AI for semantic parsing of READMEs and demo videos
- Devpost + GitHub/LinkedIn analysis for eligibility
- CSV and Polars for high-performance project data processing
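For the CSV side of that pipeline, here is a minimal sketch of loading exported project rows with the `csv` crate and `serde`; the file layout and column names are hypothetical.

```rust
// Minimal sketch of reading an exported projects CSV into typed rows.
// The column names are placeholders, not Devpost's actual export format.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct ProjectRow {
    project_name: String,
    devpost_url: String,
    team_size: u32,
}

fn load_projects(path: &str) -> Result<Vec<ProjectRow>, Box<dyn std::error::Error>> {
    let mut reader = csv::Reader::from_path(path)?;
    let mut rows = Vec::new();
    for record in reader.deserialize() {
        let row: ProjectRow = record?;
        rows.push(row);
    }
    Ok(rows)
}
```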
🧱 Challenges we ran into
- Scraping data reliably from Devpost’s unofficial APIs
- Analyzing unstructured README and video transcript content
- Dynamically mapping judges to projects without conflicts of interest (see the sketch after this list)
- Handling real-time scoring inputs efficiently
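The judge-mapping challenge can be sketched as a greedy matcher that maximizes expertise overlap while skipping conflicted or fully booked judges. The data model and heuristic below are illustrative, not the production assignment logic.

```rust
// Greedy sketch of expertise-based judge assignment with a conflict-of-interest
// filter. Capacity and the overlap heuristic are simplifications.
use std::collections::HashSet;

struct Judge {
    name: String,
    expertise: HashSet<String>,  // e.g. {"rust", "ml"}
    conflicts: HashSet<String>,  // project names this judge must not score
    capacity: usize,             // remaining projects they can take
}

struct Project {
    name: String,
    tags: HashSet<String>,
}

/// Assign each project to the eligible judge whose expertise overlaps its tags the most.
fn assign(projects: &[Project], judges: &mut [Judge]) -> Vec<(String, String)> {
    let mut assignments = Vec::new();
    for project in projects {
        let best = judges
            .iter_mut()
            .filter(|j| j.capacity > 0 && !j.conflicts.contains(&project.name))
            .max_by_key(|j| j.expertise.intersection(&project.tags).count());
        if let Some(judge) = best {
            judge.capacity -= 1;
            assignments.push((project.name.clone(), judge.name.clone()));
        }
    }
    assignments
}
```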
🏆 Accomplishments that we're proud of
- Deployed a working MVP that integrates real AI tools
- Built judge assignment logic that mimics real-world judge expertise mapping
- Automated brief generation to improve judge efficiency
- Created a full-stack platform ready for live event use
📚 What we learned
- AI-powered tooling can dramatically streamline real-world workflows
- Supabase + Clerk offers fast, secure user and data handling
- Rust + Polars are a powerful combo for performance-intensive data ops
- Seamless UX makes or breaks adoption in live judging scenarios
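As a small example of the Rust + Polars combination, here is a lazy-query sketch for shortlisting high-scoring projects; it assumes Polars' `lazy` feature and hypothetical column names.

```rust
// Minimal sketch of a Polars lazy query over a scores DataFrame.
// Assumes polars with the `lazy` feature; column names are hypothetical.
use polars::prelude::*;

fn shortlist(scores: DataFrame) -> PolarsResult<DataFrame> {
    scores
        .lazy()
        .filter(col("avg_score").gt_eq(lit(8.0)))
        .select([col("project_name"), col("avg_score")])
        .collect()
}
```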
🔮 What’s next for Judgeflow
- 🎯 Launch a white-label version for Devpost, MLH, and university organizers
- 💬 Discord integration for chat-driven project commenting
- 🧠 Improve AI scoring and plagiarism detection with model fine-tuning
- 🚀 Expand into pitch competitions, coding olympiads, and accelerator demo days
✨ Authors and Contributors
Jon Marien – 🧑🏻💻🔧 + 🛠️🗄️ + 🖌️🎨 GitHub
Yubo Sun – 🧑🏻💻🛠️🗄️ + 🔧 GitHub
Aleks Bursac – 🧑🏻💻🖌️🎨 + 🖥️🌐 GitHub
Sebastian Vega – 🧑🏻💻🖌️🎨 + 🛠️🗄️ GitHub
🙏 Acknowledgments
Perplexity AI – semantic analysis APIs
Clerk.dev – authentication
Supabase – database
Docker – deployment
Rust open-source ecosystem
💡 Built with ❤️ to make hackathons smarter, fairer, and faster than ever.