Inspiration
Our project is inspired by ongoing research in contactless respiratory and heart rate monitoring (remote photoplethysmography, or rPPG) conducted in the Mobile, Pervasive, and Sensor Computing (MPSC) Lab at UMBC, as well as by work on Automatic Infant Respiratory Estimation from Video.
One of our team members, Haley, explored this topic during the NSF REU Program (Summer 2025) under the mentorship of Dr. Zahid Hasan and the leadership of Dr. Nirmalya Roy, focusing on how camera distance variation affects the accuracy of video-based respiratory rate estimation.
In many healthcare settings such as ICUs (Intensive Care Units), NICUs (Neonatal Intensive Care Units), and elderly care facilities, cardiopulmonary monitoring often relies on chest straps, adhesive electrodes, nasal cannulas, or wired sensors.
While effective, these methods can be uncomfortable, restrictive, and even harmful over time—causing skin irritation, infection risk, or sleep disruption, especially among vulnerable populations.
Similarly, people undergoing sleep studies or living with sleep apnea often struggle with bulky or intrusive monitoring equipment.
To address these limitations, we aim to create a comfortable, non-invasive, and contactless alternative using computer vision and signal processing.
Our goal is to continuously track respiratory and heart rates from video data in real time, offering a more natural and stress-free monitoring experience.
With rapid advances in AI, low-cost imaging, and medical signal processing, we believe this technology can redefine remote health monitoring—bridging the gap between clinical precision and real-world comfort.
What it does
- Vital Signs Monitoring: Measures breathing rate (BR) via chest motion and heart rate (HR) via rPPG, using adapted open-source research code from GitHub.
- Sleep Safety: Detects sleep positions (back, side, stomach) and alerts for SIDS risks like face-down sleeping (a simple landmark heuristic is sketched after this list).
- Visual AI Analysis: Recognizes sleep, crying, or risky movement (e.g., climbing out of crib).
- Contextual Intelligence: Learns baby’s sleep patterns and explains insights in natural language.
- Automated Reports: Generates daily summaries of vitals, positions, and safety alerts.
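As an illustration of the sleep-position feature, here is one simple heuristic built on MediaPipe pose landmarks; the landmark choices and thresholds are our assumptions for demonstration, not necessarily the classifier Halo runs.

```python
# Hedged heuristic (illustrative assumption, not Halo's actual classifier):
# infer back / side / stomach from MediaPipe pose landmarks.
import mediapipe as mp

mp_pose = mp.solutions.pose

def classify_sleep_position(landmarks, side_gap=0.15, face_visibility=0.5):
    """Rough position estimate from landmark depth and visibility."""
    nose = landmarks[mp_pose.PoseLandmark.NOSE]
    left = landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER]
    right = landmarks[mp_pose.PoseLandmark.RIGHT_SHOULDER]
    # One shoulder much closer to the camera than the other suggests side-lying.
    if abs(left.z - right.z) > side_gap:
        return "side"
    # Face landmarks visible -> likely supine; hidden -> likely prone (SIDS risk).
    return "back" if nose.visibility > face_visibility else "stomach"
```

In practice the per-frame result would be smoothed over several seconds before a face-down alert is raised.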
How We Built It
Halo was built by integrating open-source research and real-world AI systems to create a contactless, intelligent health monitor.
We combined the Automatic Infant Respiratory Estimation from Video method with Dr. Hasan's (UMBC) Respiratory Rate (RR) code and integrated the rPPG Toolbox for Heart Rate (HR) detection.
Core Pipeline
Camera Feed → MediaPipe Pose Detection → Signal Extraction (RR, HR) → Filtering & Analysis → Alerts & Reports
- Breathing Rate: Optical flow and color-channel analysis on chest/abdomen regions
- Heart Rate: rPPG-based detection using facial skin tone variations (green channel)
- Filtering: Butterworth bandpass filters to isolate physiological frequencies (see the sketch after this list)
- Age-Adaptive Ranges: Automatically adjusts HR/RR thresholds for infants, children, and adults
- Flagging: Detects 3+ abnormal readings, then triggers a voice alert via LiveKit
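To make the pipeline concrete, the sketch below walks the chest-signal path: MediaPipe pose landmarks define a chest/abdomen ROI, the mean green value per frame forms the raw trace, and a Butterworth bandpass (0.1–1.0 Hz) isolates breathing. It is a minimal illustration under assumptions (shoulder/hip ROI, roughly a 30 s window at the webcam frame rate), not the exact research code we adapted.

```python
# Minimal sketch of the chest-signal stage (assumed ROI and window length):
# camera -> MediaPipe pose -> chest ROI -> per-frame mean green value ->
# Butterworth bandpass at breathing frequencies.
import cv2
import numpy as np
import mediapipe as mp
from scipy.signal import butter, filtfilt

mp_pose = mp.solutions.pose

def chest_roi_mean(frame_bgr, landmarks):
    """Mean green value inside the box spanned by the shoulders and hips."""
    h, w = frame_bgr.shape[:2]
    points = [mp_pose.PoseLandmark.LEFT_SHOULDER, mp_pose.PoseLandmark.RIGHT_SHOULDER,
              mp_pose.PoseLandmark.LEFT_HIP, mp_pose.PoseLandmark.RIGHT_HIP]
    xs = [int(landmarks[p].x * w) for p in points]
    ys = [int(landmarks[p].y * h) for p in points]
    roi = frame_bgr[min(ys):max(ys), min(xs):max(xs)]
    return float(roi[:, :, 1].mean())  # green channel

def bandpass(signal, fs, low=0.1, high=1.0, order=3):
    """Zero-phase Butterworth bandpass to keep only respiratory frequencies."""
    b, a = butter(order, [low / (0.5 * fs), high / (0.5 * fs)], btype="band")
    return filtfilt(b, a, signal)

cap = cv2.VideoCapture(0)                     # webcam feed
fs = cap.get(cv2.CAP_PROP_FPS) or 30.0        # fall back to 30 fps if unknown
samples = []
with mp_pose.Pose() as pose:
    while len(samples) < int(fs * 30):        # collect ~30 s of signal
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            samples.append(chest_roi_mean(frame, result.pose_landmarks.landmark))
cap.release()

breathing_signal = bandpass(np.asarray(samples), fs)
```

The HR path follows the same structure, swapping in a facial ROI and the 0.7–4.0 Hz band.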
⚙️ Tech Stack
| Category | Tools |
|---|---|
| Computer Vision | MediaPipe, OpenCV |
| Signal Processing | NumPy, SciPy |
| Visualization | Matplotlib, OpenCV GUI |
| Cloud & Storage | Snowflake (time-series vital sign data; logging sketch below) |
| AI Agent | LiveKit Agents, GPT-4.1-mini, Cartesia Sonic 2 (TTS), AssemblyAI (STT), Silero (VAD) |
| Deployment | Docker, Virtual Environment (venv) |
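For the storage layer, the sketch below shows how readings could be written to Snowflake with the official Python connector; the connection parameters, the `vitals` table, and its columns are placeholders, not our actual schema.

```python
# Hedged sketch of time-series logging to Snowflake; account details and the
# `vitals` table/columns are placeholders rather than Halo's real schema.
import datetime
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)

def log_vitals(heart_rate_bpm, respiratory_rate_bpm, position):
    """Append one time-stamped reading for later trend analysis and reports."""
    conn.cursor().execute(
        "INSERT INTO vitals (ts, heart_rate, respiratory_rate, position) "
        "VALUES (%s, %s, %s, %s)",
        (datetime.datetime.utcnow(), heart_rate_bpm, respiratory_rate_bpm, position),
    )

log_vitals(128.0, 34.0, "back")
```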
Key Algorithms
- RR Detection: Tracks RGB fluctuations in chest/abdomen ROI (0.1–1.0 Hz)
- HR Detection: Extracts facial pulse signal via rPPG (0.7–4.0 Hz) (see the rate-conversion and flagging sketch after this list)
- Abnormality Detection: Pattern-based alerts from Snowflake logs
- Voice Agent: Real-time clinical alerts and summaries
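The detection bullets above share one common step: take the bandpassed signal, find the dominant in-band frequency, convert it to a per-minute rate, and compare it against an age-adjusted range. The sketch below shows that step; the normal ranges and the three-reading rule are illustrative assumptions, not clinical thresholds.

```python
# Hedged sketch of rate estimation and flagging; the frequency bands come from
# the bullets above, while the age ranges below are placeholders, not clinical values.
import numpy as np

RR_BAND = (0.1, 1.0)   # Hz, breathing
HR_BAND = (0.7, 4.0)   # Hz, pulse

NORMAL_RANGES = {
    "infant": {"rr": (30, 60), "hr": (100, 160)},
    "child":  {"rr": (20, 30), "hr": (70, 120)},
    "adult":  {"rr": (12, 20), "hr": (60, 100)},
}

def dominant_rate_bpm(signal, fs, band):
    """Dominant in-band frequency of a filtered signal, converted to per minute."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(freqs[mask][np.argmax(power[mask])] * 60.0)

def should_alert(readings, age_group, kind="rr", n_abnormal=3):
    """True when the last n readings all fall outside the age-adjusted range."""
    lo, hi = NORMAL_RANGES[age_group][kind]
    recent = readings[-n_abnormal:]
    return len(recent) == n_abnormal and all(r < lo or r > hi for r in recent)

# Example: rr = dominant_rate_bpm(breathing_signal, fs, RR_BAND)
#          if should_alert(rr_history, "infant", "rr"): trigger a voice alert
```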
Unique Features
- 100% Contactless — no sensors or wearables
- Age-Adaptive monitoring
- Research-Validated using GitHub-based open-source respiratory & rPPG codebases
- AI Voice Alerts for caregivers
- Cloud-Integrated (Snowflake) for time-series tracking
- Live Visualization with confidence scores
Halo demonstrates how computer vision, signal processing, and conversational AI can come together to enable safe, real-time, and ethical healthcare monitoring — powered entirely by a standard webcam.
Challenges We Ran Into
Setting up and integrating multiple AI models was a patience-testing process — especially with slower internet speeds that made downloading large libraries and repositories time-consuming. We also encountered version mismatches and dependency issues while configuring our virtual environment, which forced us to dive deeper into Python environments, Docker, and system setup.
Despite the setbacks, these challenges helped us strengthen our debugging skills, learn new development workflows, and gain a deeper understanding of AI pipeline integration and open-source model deployment.
Accomplishments That We're Proud Of
- Built a working prototype that can estimate breathing patterns and heart rate in real time using only a webcam — completely contactless and non-invasive.
- Integrated open-source AI models for respiratory and cardiac signal extraction, adapting research-grade codebases into a lightweight, hackathon-ready system.
- Successfully set up a full virtual environment and managed all dependencies despite limited time and internet speed challenges.
- Bridged research and real-world impact — transforming prior academic work from the NSF REU program into a functional demo with potential use in healthcare and home monitoring.
- Learned how to integrate multiple AI systems (like MediaPipe, Gemini, and Letta AI) to detect posture, understand context, and interpret physiological signals visually.
- Collaborated effectively under pressure, learning new tools, debugging errors, and iterating fast — while keeping our focus on the goal of improving comfort and care for vulnerable populations.
What We Learned
- How to combine computer vision and physiological signal processing to measure breathing and heart rate without physical sensors.
- Gained experience in setting up virtual environments with GPU and Docker support to run advanced AI models efficiently.
- Learned to debug and optimize real-time pipelines, integrating multiple libraries and frameworks under tight time constraints.
- Explored how context-aware AI systems like Gemini and Letta can enhance interpretation and communication of health insights.
- Discovered the importance of balancing innovation with safety and ethics, especially when building AI for sensitive applications like infant monitoring.
- Sharpened our teamwork, problem-solving, and adaptability — learning to pivot quickly and make progress even when facing technical hurdles.
What's Next for Halo
- Improve accuracy and robustness of respiratory and heart rate detection using better datasets and advanced filtering techniques.
- Validate performance across different lighting conditions, camera qualities, and subject distances to ensure real-world reliability.
- Enhance contextual intelligence — enabling Halo to recognize deeper behavioral cues like restlessness, discomfort, or irregular breathing patterns.
- Develop a mobile-friendly version with real-time alerts and parental dashboards.
- Expand privacy and edge computing capabilities, ensuring all processing remains secure and local.
- Collaborate with pediatric and healthcare researchers to evaluate Halo’s clinical and caregiving potential.
- Ultimately, evolve Halo into a smart, ethical, and accessible AI companion for monitoring well-being — from NICUs to elderly care.
