Inspiration

Pediatric hospital stays can be isolating and frightening for young patients, especially at night when families can't be present. Children often hesitate to bother nurses with their needs or can't articulate their discomfort effectively. We were inspired to create BMO Care, an AI companion that bridges this gap, providing 24/7 emotional support while giving clinical staff real-time insights into patient well-being.

What it does

BMO Care is an intelligent pediatric hospital companion system that combines conversational AI, computer vision, and IoT health monitoring to support hospitalized children and their care teams.

For Young Patients:

- Buddy - A warm, kid-friendly AI voice companion that children can talk to anytime, reducing loneliness and anxiety
- Detects pain reports through natural conversation, systematically asking for location and severity (1-5 scale)
- Provides emotional support and comfort without giving medical advice
- Responds to detected emotions using real-time facial analysis

For Clinical Staff:

- Real-time nurse dashboard showing patient vitals, alerts, and AI conversation logs
- Automated alerts for high-priority events: elevated heart rate (>140 BPM), prolonged restlessness, concerning speech patterns (sketched below)
- Direct phone calls via Twilio for urgent situations (pain level ≥3)
- Medicine reminder system (4-hour intervals)
- MongoDB event logging for comprehensive patient monitoring
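
At their core, the escalation rules reduce to a couple of comparisons against the thresholds above. A simplified sketch (the function and tier names here are illustrative, not our exact code):

```python
HR_ALERT_BPM = 140        # elevated heart rate threshold
PAIN_CALL_LEVEL = 3       # pain level that triggers an automated Twilio call

def triage(heart_rate_bpm: int, pain_level: int | None) -> str:
    """Map the latest readings to an escalation tier (illustrative)."""
    if pain_level is not None and pain_level >= PAIN_CALL_LEVEL:
        return "call_nurse"       # urgent: place automated phone call
    if heart_rate_bpm > HR_ALERT_BPM:
        return "dashboard_alert"  # high priority: flag on nurse dashboard
    return "log_only"             # routine: record event to MongoDB
```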

For Families:

- Mobile app and web portal to check on their child remotely
- Real-time vital signs from Apple Watch integration
- View conversation history and emotional state updates

How we built it

AI Voice Pipeline:

- ElevenLabs for natural speech-to-text and text-to-speech with kid-friendly voices
- Google Gemini for fast, context-aware conversational responses
- Intelligent pain detection using NLP pattern matching for body parts and severity (sketched below)
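
A simplified sketch of the pattern-matching idea; the keyword lists and phrasing here are small illustrative subsets, not our full rule set:

```python
import re

# Illustrative subsets -- the real keyword lists are much longer.
BODY_PARTS = {"head", "tummy", "stomach", "arm", "leg", "back", "throat", "ear"}
PAIN_WORDS = re.compile(r"\b(hurt|hurts|ache|aches|ow|owie|sore|pain)\b", re.I)
SEVERITY = re.compile(r"\b([1-5])\b")

def detect_pain(utterance: str) -> dict:
    """Return any pain signal found in a child's utterance."""
    result = {"pain": bool(PAIN_WORDS.search(utterance)),
              "location": None, "severity": None}
    for word in re.findall(r"[a-z]+", utterance.lower()):
        if word in BODY_PARTS:
            result["location"] = word
            break
    match = SEVERITY.search(utterance)
    if match:
        result["severity"] = int(match.group(1))  # 1-5 scale
    return result

print(detect_pain("my tummy hurts, it's like a 4"))
# {'pain': True, 'location': 'tummy', 'severity': 4}
```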

Computer Vision:

- MediaPipe face landmark detection for emotion classification
- Real-time visual state monitoring (calm/restless/eyes closed)
- Motion detection using OpenCV frame differencing (see the sketch below)
- Sustained emotional state tracking with configurable alert thresholds
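
The motion detector is essentially classic frame differencing. A condensed sketch, with an illustrative threshold rather than our tuned value:

```python
import cv2

MOTION_THRESHOLD = 0.02  # fraction of changed pixels that counts as "restless"

def motion_score(prev_gray, frame_bgr):
    """Return (score, gray) where score is the fraction of pixels that changed."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)  # suppress sensor noise
    if prev_gray is None:
        return 0.0, gray
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    score = cv2.countNonZero(mask) / mask.size
    return score, gray

# Usage: feed consecutive camera frames, compare score to MOTION_THRESHOLD.
```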

IoT Health Monitoring:

- Bluetooth Low Energy integration with Apple Watch for continuous heart rate monitoring (sketch below)
- Real-time vital streaming to dashboard via Socket.io
- Configurable thresholds for abnormal readings
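
Heart rate arrives over the standard GATT Heart Rate Measurement characteristic. A pared-down sketch of the bleak side (the device address is a placeholder, and the real pipeline forwards readings to Socket.io instead of printing):

```python
import asyncio
from bleak import BleakClient

HR_MEASUREMENT = "00002a37-0000-1000-8000-00805f9b34fb"  # standard GATT UUID
WATCH_ADDRESS = "XX:XX:XX:XX:XX:XX"  # placeholder -- discovered via BLE scan

def parse_heart_rate(data: bytearray) -> int:
    # Per the GATT spec, bit 0 of the flags byte selects uint8 vs uint16 BPM.
    if data[0] & 0x01:
        return int.from_bytes(data[1:3], "little")
    return data[1]

async def stream_heart_rate():
    async with BleakClient(WATCH_ADDRESS) as client:
        def on_notify(_sender, data: bytearray):
            print(f"heart rate: {parse_heart_rate(data)} BPM")
        await client.start_notify(HR_MEASUREMENT, on_notify)
        await asyncio.sleep(60)  # stream for a minute in this sketch

asyncio.run(stream_heart_rate())
```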

Backend Infrastructure:

- Node.js/Express server with Socket.io for real-time bidirectional communication
- MongoDB Atlas for scalable event/conversation storage (sketched below)
- RESTful API for patient data, alerts, and family messaging
- Expo push notifications for mobile alerts
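
Event logging boils down to plain document inserts. A sketch from the Python side of the pipeline via pymongo (URI, database, and collection names are placeholders):

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Atlas connection string placeholder -- supply real credentials.
client = MongoClient("mongodb+srv://<user>:<pass>@cluster.example.mongodb.net")
events = client["bmo_care"]["events"]  # illustrative names

def log_event(patient_id: str, kind: str, payload: dict) -> None:
    """Append a timestamped event document for the nurse dashboard to query."""
    events.insert_one({
        "patient_id": patient_id,
        "kind": kind,          # e.g. "pain_report", "hr_alert", "conversation"
        "payload": payload,
        "ts": datetime.now(timezone.utc),
    })

log_event("patient-42", "pain_report", {"location": "tummy", "severity": 4})
```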

Frontend:

- React-based nurse dashboard with live patient monitoring
- React Native mobile app for family members
- WebSocket-powered real-time updates

Alert Systems:

- Telegram bot for instant nurse notifications with rich formatting
- Twilio integration for automated phone calls during urgent situations
- Intelligent alert cooldown system (60s) to prevent spam (sketched below)
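
The cooldown logic rate-limits repeats while letting urgent events through. A distilled sketch (the urgent-type set is illustrative):

```python
import time

COOLDOWN_S = 60
URGENT = {"pain_report", "concerning_speech"}  # always delivered (illustrative)
_last_sent: dict[str, float] = {}

def should_send(alert_type: str) -> bool:
    """Rate-limit repeat alerts, but never suppress urgent ones."""
    now = time.monotonic()
    if alert_type in URGENT:
        _last_sent[alert_type] = now
        return True
    if now - _last_sent.get(alert_type, float("-inf")) < COOLDOWN_S:
        return False  # still cooling down -- drop to avoid alert fatigue
    _last_sent[alert_type] = now
    return True
```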

Challenges we ran into

Natural conversation flow with pain detection - Balancing a friendly chatbot that doesn't feel like an interrogation while systematically gathering clinical data (pain location → severity → recommendation)

Real-time emotion detection accuracy - MediaPipe blendshapes required careful threshold tuning to avoid false positives (e.g., distinguishing genuine sadness from resting face)
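The fix was to require scores to stay above a tuned threshold for a sustained run of frames. A distilled sketch, assuming blendshape scores have already been extracted into a dict keyed by MediaPipe's expression names (the threshold and frame count here are illustrative):

```python
SAD_THRESHOLD = 0.55   # tuned to ignore neutral "resting face" scores
SUSTAIN_FRAMES = 45    # ~1.5 s at 30 fps before trusting the signal

sad_streak = 0

def update_sadness(blendshapes: dict[str, float]) -> bool:
    """Return True only after sadness persists, filtering one-frame blips."""
    global sad_streak
    # MediaPipe exposes per-expression scores like "mouthFrownLeft"/"mouthFrownRight".
    frown = max(blendshapes.get("mouthFrownLeft", 0.0),
                blendshapes.get("mouthFrownRight", 0.0))
    sad_streak = sad_streak + 1 if frown > SAD_THRESHOLD else 0
    return sad_streak >= SUSTAIN_FRAMES
```
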

Cross-platform hardware integration - Apple Watch BLE scanning and pairing proved finicky; it required the Heartcast app and careful UUID handling for reliable heart rate streaming

Multi-source alert coordination - Preventing alert fatigue by implementing cooldowns while ensuring urgent events (pain, concerning speech) always get through

Maintaining child-appropriate AI responses - Extensive prompt engineering to keep Buddy warm and reassuring while never giving medical advice or revealing its AI nature

Accomplishments that we're proud of

Fully functional end-to-end system with AI voice, computer vision, IoT sensors, real-time dashboards, and mobile alerts

Child-centered UX design - Buddy's conversational style is genuinely kid-friendly (tested responses for ages 4-12)

Intelligent pain tracking pipeline - Automatically detects pain mentions → asks location → asks severity → generates care recommendations via LLM → alerts nurse → offers to call
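Under the hood this is a small conversation state machine. Roughly, with states and transitions simplified for illustration:

```python
from enum import Enum, auto

class PainFlow(Enum):
    IDLE = auto()
    ASK_LOCATION = auto()
    ASK_SEVERITY = auto()
    ESCALATE = auto()

def next_state(state: PainFlow, heard: dict) -> PainFlow:
    """Advance the pain-tracking flow based on what was parsed from speech."""
    if state is PainFlow.IDLE and heard.get("pain"):
        return PainFlow.ASK_LOCATION
    if state is PainFlow.ASK_LOCATION and heard.get("location"):
        return PainFlow.ASK_SEVERITY
    if state is PainFlow.ASK_SEVERITY and heard.get("severity"):
        return PainFlow.ESCALATE  # LLM recommendation -> nurse alert -> offer call
    return state
```
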

Production-ready architecture - MongoDB Atlas cloud database, Socket.io for scalability, background task processing, error handling

Seamless Apple Watch integration - Continuous heart rate streaming with automatic reconnection and data validation

Sophisticated emotion detection - 52 facial blendshapes analyzed in real-time to classify happiness, sadness, surprise, and distress

Multi-platform family access - Web portal + React Native mobile app for remote monitoring

What we learned

- Pediatric UX is fundamentally different - Word choice, response length, and tone require extensive consideration for young users
- Healthcare AI needs guardrails - Explicit system prompts to prevent medical advice, diagnosis, or revealing AI nature
- Real-time systems need resilience - Implemented reconnection logic, cooldowns, validation, and graceful degradation (backoff sketch below)
- Computer vision thresholds are context-dependent - Hospital lighting, patient positioning, and camera angles affect emotion detection accuracy
- Integration complexity compounds fast - Coordinating Python (AI/sensors), Node.js (server), React (frontend), MongoDB, Socket.io, BLE, and cloud APIs taught us valuable architecture lessons
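
One resilience pattern recurred everywhere (BLE, sockets, cloud APIs): retry with jittered exponential backoff. A generic sketch:

```python
import random
import time

def with_backoff(connect, max_attempts: int = 5):
    """Retry a flaky connection with jittered exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            delay = min(2 ** attempt, 30) + random.random()  # cap + jitter
            time.sleep(delay)
    raise ConnectionError("gave up after repeated failures")
```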

What's next for BMO Care

Multi-language support - Spanish, Mandarin, and other languages for diverse patient populations

Interactive games and stories - Voice-activated entertainment to further reduce boredom and anxiety

Predictive analytics - ML models to predict adverse events before they occur based on vital trends

EHR integration - Connect with Epic/Cerner for medication schedules, diagnosis context, and clinical workflows

HIPAA compliance hardening - End-to-end encryption, audit logging, and formal security review

Physician insights dashboard - Aggregate data across patients for clinical research and quality improvement

Multimodal AI - Vision-language models (GPT-4V) to detect visible signs of distress like grimacing or wound issues

Built With

- Google Gemini - Conversational AI
- elevenlabs - Natural TTS/STT
- mediapipe - Face landmark detection
- opencv - Computer vision

Backend:

- node.js - Server runtime
- express - Web framework
- socket.io - Real-time bidirectional communication
- mongoose - MongoDB ODM
- vultr - Hosting
- MongoDB Atlas cluster
- pymongo - Python MongoDB driver

IoT & Hardware:

- bleak - Bluetooth Low Energy (Python)
- Apple Watch + Heartcast app
- pyaudio + sounddevice - Audio I/O

Communication:

- twilio - Automated phone calls
- Telegram Bot API - Instant nurse alerts
- expo-server-sdk - Push notifications

Frontend:

- react - Web dashboards
- react-native - Family mobile app
- vite - Build tool

Infrastructure:

- MongoDB Atlas - Cloud database
- Python 3.13 - AI pipeline
- WebSockets - Real-time data streaming
