Inspiration
The global radiologist shortage is reaching crisis levels, with over 42,000 radiologists needed by 2033 and patients waiting hours or days for critical chest X-ray diagnoses. We were inspired by the potential of AI not to replace radiologists, but to extend their expertise and ensure no patient waits for a life-saving diagnosis. Emergency departments need instant triage for conditions like pneumothorax, while rural hospitals often lack in-person specialist expertise entirely.
What it does
XightMD is an AI-powered chest X-ray analysis platform that provides instant triage and structured radiology reports in under 30 seconds. Our multi-agent system coordinates four specialized AI agents (a message-passing sketch follows the list):
- Triage Agent: Analyzes X-rays for 14 lung conditions, assigns urgency scores (1-5), and identifies critical findings
- Report Agent: Generates structured radiology reports following professional medical standards (Indication, Comparison, Findings, Impression)
- QA Agent: Validates analysis consistency and flags cases requiring manual review
- Coordinator Agent: Orchestrates the entire pipeline and manages workflow
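As a minimal sketch of how two of these agents exchange messages over Fetch.ai's uAgents (the message fields, seed, and canned result here are illustrative placeholders, not our production schema):

```python
# Illustrative uAgents message-passing sketch: the coordinator sends an X-ray to
# the triage agent, which replies with an urgency score. Fields and the canned
# result are placeholders, not our production schema.
from uagents import Agent, Context, Model

class XRayRequest(Model):
    image_b64: str          # base64-encoded chest X-ray

class TriageResult(Model):
    urgency: int            # 1 (routine) to 5 (critical)
    findings: list[str]
    confidence: float

triage_agent = Agent(name="triage", seed="triage-agent-demo-seed")

@triage_agent.on_message(model=XRayRequest)
async def handle_xray(ctx: Context, sender: str, msg: XRayRequest):
    # In the real pipeline the classifier runs here; this returns a canned result.
    await ctx.send(sender, TriageResult(urgency=4,
                                        findings=["Pneumothorax"],
                                        confidence=0.91))

if __name__ == "__main__":
    triage_agent.run()
```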
Intended functionality
The system provides confidence scores, priority levels, and detailed medical findings while maintaining HIPAA-compliant de-identification processes.
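As an example of the de-identification step, a minimal pass with pydicom might look like the following (this assumes DICOM inputs; the PHI tag list is illustrative, not exhaustive, and our actual pipeline may differ):

```python
# Minimal de-identification sketch using pydicom (assumes DICOM inputs; the PHI
# tag list below is illustrative, not exhaustive).
import pydicom

PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
            "PatientAddress", "ReferringPhysicianName", "InstitutionName"]

def deidentify(path_in: str, path_out: str) -> None:
    ds = pydicom.dcmread(path_in)
    for keyword in PHI_TAGS:
        if keyword in ds:               # blank the tag if present
            setattr(ds, keyword, "")
    ds.remove_private_tags()            # drop vendor-specific private tags
    ds.save_as(path_out)
```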
How we built it
- Frontend: Next.js 14 with TypeScript and Tailwind CSS for a responsive medical interface
- Backend: FastAPI server bridging frontend requests to the agent network
- AI Framework: Fetch.ai's uAgents for multi-agent coordination, with Claude 4 for multimodal analysis
- ML Pipeline: Custom lung disease classifier trained on medical datasets
- Data: Training done on the NIH Chest X-ray dataset (100,000+ images), with reports structured using ReXGradient-160K formats
Architecture Flow:
Frontend → FastAPI → Coordinator Agent → [Triage + Report + QA Agents] → Results
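A sketch of the FastAPI bridge at the front of that flow (the endpoint name and response shape are placeholders, and dispatch to the agent network is elided):

```python
# Sketch of the FastAPI layer that receives an upload and hands it to the
# coordinator agent. Endpoint name and response shape are placeholders.
import base64

from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/analyze")
async def analyze(file: UploadFile = File(...)):
    image_b64 = base64.b64encode(await file.read()).decode()
    # Dispatch to the coordinator agent happens here (transport elided);
    # the agents reply with triage, report, and QA results.
    return {"status": "queued"}
```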
Challenges we ran into
Deployment Nightmares: We hit multiple deployment failures across different platforms, and agent network connectivity issues blocked the final deployment even though everything worked locally.
Model Architecture Chaos: We struggled to train various architectures on the data because we initially underestimated how severely class imbalance affects multi-label classifiers; a standard remedy is sketched below.
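One standard remedy, shown here as a PyTorch sketch with illustrative per-class counts (not the real dataset statistics): up-weight the rare positive labels in a per-condition binary cross-entropy.

```python
# Sketch of per-class positive weighting for an imbalanced 14-label chest X-ray
# classifier (PyTorch; label counts are illustrative, not real dataset stats).
import torch
import torch.nn as nn

TOTAL_IMAGES = 100_000.0
# Illustrative positives per condition: common findings vs. rare ones.
pos_counts = torch.tensor([12000., 2800., 13000., 20000., 5800., 6300., 1400.,
                           5300., 4700., 2300., 2500., 1700., 3400., 230.])
pos_weight = (TOTAL_IMAGES - pos_counts) / pos_counts   # up-weight rare positives
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, 14)                     # fake batch of model outputs
targets = torch.randint(0, 2, (8, 14)).float()  # fake multi-label targets
loss = criterion(logits, targets)
```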
Agent Integration Hell: Getting four separate uAgents to communicate reliably was far more complex than expected. Message passing, state management, and coordination between agents broke multiple times, especially under load.
Last-Minute Breaks: With hours left before submission, our agent network mysteriously stopped communicating properly, forcing us to implement fallback mock responses to demonstrate the UI.
Medical Data Complexity: Real medical datasets are messy. Inconsistent formats, missing labels, and strict privacy requirements made training significantly harder than in standard ML projects.
Time Crunch: Our ambitious multi-agent architecture proved too complex for the hackathon timeframe; we underestimated the coordination complexity between the Claude API, uAgents, and medical data processing.
Accomplishments that we're proud of
- Built a Working Medical AI Pipeline: Despite challenges, created a functional chest X-ray analysis system that produces medically accurate reports
- Multi-Agent Architecture: Successfully implemented complex agent coordination using Fetch.ai's uAgents framework with specialized roles
- Professional Medical Interface: Created a polished healthcare-grade UI that medical professionals could actually use
- Real Dataset Integration: Trained models on legitimate medical datasets (NIH, ReXGradient-160K) rather than toy examples
- Technical Innovation: Combined computer vision, natural language processing, and multi-agent systems in a novel healthcare application
What we learned
Hackathon Projects ≠ Production Medical Systems: Medical AI is significantly more complex than typical hackathon projects due to regulatory, accuracy, and safety requirements.
Multi-Agent Systems Are Hard: Coordinating multiple AI agents reliably requires robust error handling, state management, and fallback mechanisms we didn't initially account for.
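The kind of fallback mechanism we wish we had built from the start, as a minimal sketch (the function names, timeout, and canned response are illustrative):

```python
# Sketch of a time-boxed agent call with a canned fallback (names and the
# timeout are illustrative, not our production values).
import asyncio

FALLBACK = {"urgency": None,
            "report": "Agent network unavailable; case flagged for manual review."}

async def analyze_with_fallback(run_pipeline, image_b64: str,
                                timeout_s: float = 25.0) -> dict:
    try:
        return await asyncio.wait_for(run_pipeline(image_b64), timeout=timeout_s)
    except (asyncio.TimeoutError, ConnectionError):
        return FALLBACK
```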
Deployment is Critical: The most impressive local demo means nothing if you can't deploy it reliably; we should have prioritized deployment infrastructure earlier.
Medical Data is Unique: Healthcare datasets require specialized preprocessing, privacy handling, and domain expertise that differs drastically from standard ML workflows.
Scope Creep Kills: Our ambitious vision of multiple agents, custom ML models, and production-ready features was too much for 24 hours; a simpler MVP would have been more successful.
Integration Testing Matters: Individual components worked perfectly, but integration between agents, APIs, and frontend broke in unexpected ways under pressure.
What's next for XightMD
Immediate Fixes: Resolve deployment issues and stabilize agent communication for reliable demo deployment.
Model Optimization: Improve lung disease detection accuracy through better handling of class imbalance and more compute time and resources.
Clinical Validation: Partner with radiologists to validate our reports against real clinical cases and refine medical accuracy.
Expansion: Extend beyond chest X-rays to other imaging modalities (CT, MRI) and anatomical regions.
Real-World Pilot: Deploy pilot programs in emergency departments and rural hospitals to demonstrate real clinical impact.
Agent Improvements: Enhance multi-agent coordination, add specialized agents for specific conditions, and improve quality assurance algorithms.
XightMD represents the future of AI-assisted radiology: not replacing doctors, but empowering them to save more lives, faster and more reliably.
Built With
- anthropic
- fastapi
- fetch
- nextjs