Inspiration

Every year, around 2.4 million newborns die within their first month of life from preventable conditions such as jaundice, asphyxia, and respiratory distress, often because early symptoms go unnoticed. In many parts of the world, tools like pulse oximeters or bilirubin scanners are unavailable. We wanted to build something simple yet powerful that could turn an ordinary phone into a life-saving early-detection system, giving caregivers and nurses the ability to act within the golden minute.

What it does

SEOS (Smart Early Observation System) uses AI-powered color, sound, and vital-sign analysis to screen newborns for early danger signs. It:

Detects skin color anomalies (yellow, blue, pale) indicating jaundice or poor oxygenation.

Analyzes cry or breathing audio to identify distress patterns.

Integrates a short questionnaire to evaluate temperature, feeding, and breathing effort.

SEOS then generates an instant risk report (low, moderate, or high) and provides clear next-step guidance, empowering healthcare workers and parents to take timely action.

How we built it

We developed SEOS using Streamlit, OpenCV, scikit-learn, and librosa for real-time image and audio processing.

Computer Vision (CV): Applied skin-weighted K-means clustering in CIELAB color space to quantify jaundice, cyanosis, and pallor levels.
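A minimal sketch of how that clustering step could look, reading "skin-weighted" as masking to skin-like pixels and weighting clusters by pixel count. The YCrCb skin-mask thresholds, cluster count, and channel interpretation below are illustrative assumptions, not SEOS's tuned values:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def dominant_skin_color_lab(bgr_image, n_clusters=3):
    """Cluster skin-like pixels in CIELAB and score the dominant skin tone."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    # Crude skin mask in YCrCb (a common heuristic, assumed here for illustration).
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
    skin_pixels = lab[mask > 0].astype(np.float32)
    if len(skin_pixels) < n_clusters:
        return None  # not enough skin visible to cluster
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(skin_pixels)
    # Weight clusters by pixel count; take the largest as the dominant skin tone.
    counts = np.bincount(km.labels_)
    L, a, b = km.cluster_centers_[counts.argmax()]
    # In OpenCV's 8-bit LAB encoding, a and b are offset by 128:
    # b - 128 > 0 leans yellow (possible jaundice); a - 128 < 0 leans cyan/blue.
    return {"lightness": L, "yellowness": b - 128, "blueness": 128 - a}
```

Thresholding the yellowness and blueness scores against calibrated cutoffs would then map the dominant tone to jaundice, cyanosis, or pallor flags.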

Audio AI: Extracted features like RMS, zero-crossing rate, and spectral centroid to detect abnormal cries.
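The named features map directly onto librosa calls; a hedged sketch (the sample rate and the example file name are assumptions for illustration):

```python
import librosa

def cry_features(wav_path):
    """Return frame-averaged acoustic features for a recorded cry."""
    y, sr = librosa.load(wav_path, sr=22050, mono=True)
    return {
        "rms": float(librosa.feature.rms(y=y).mean()),              # loudness proxy
        "zcr": float(librosa.feature.zero_crossing_rate(y).mean()), # noisiness proxy
        "centroid_hz": float(librosa.feature.spectral_centroid(y=y, sr=sr).mean()),  # brightness
    }

print(cry_features("cry_sample.wav"))  # hypothetical recording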

Machine Learning: Added a few-shot training module allowing users to train custom models with small datasets.
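One plausible reading of that few-shot module, sketched as a nearest-neighbour classifier fitted on a handful of labeled feature vectors (the feature layout, labels, and numbers are invented for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row: [rms, zcr, spectral_centroid] from the audio step above.
X_train = np.array([
    [0.02, 0.05, 1500.0],  # labeled "normal"
    [0.03, 0.06, 1700.0],  # labeled "normal"
    [0.08, 0.15, 3200.0],  # labeled "distress"
    [0.09, 0.18, 3500.0],  # labeled "distress"
])
y_train = np.array(["normal", "normal", "distress", "distress"])

# k-NN degrades gracefully on tiny datasets, which suits few-shot use.
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(clf.predict([[0.07, 0.14, 3000.0]]))  # -> ['distress']
```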

Fusion Engine: Combined image, audio, and questionnaire inputs into a unified risk score with actionable recommendations.
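One way such a fusion could work is a weighted rule-based score; the weights and cutoffs below are illustrative assumptions, not SEOS's tuned values:

```python
def fuse_risk(color_score, audio_score, questionnaire_score,
              weights=(0.4, 0.3, 0.3)):
    """Fuse per-modality scores (each normalized to [0, 1]) into a risk tier."""
    combined = (weights[0] * color_score
                + weights[1] * audio_score
                + weights[2] * questionnaire_score)
    if combined >= 0.66:
        return "high", "Seek urgent clinical evaluation immediately."
    if combined >= 0.33:
        return "moderate", "Re-screen within a few hours; monitor feeding and breathing."
    return "low", "Continue routine care and regular checks."

print(fuse_risk(0.9, 0.8, 0.7))  # -> ('high', 'Seek urgent clinical evaluation immediately.')
```

Keeping the fusion rule-based keeps the pipeline transparent and light enough for low-end devices, which matches the constraint described below.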

Challenges we ran into

Ensuring accurate color detection under varying lighting and camera conditions.

Designing a lightweight AI pipeline that runs smoothly on low-end devices without cloud dependency.

Balancing usability and clinical depth—making it simple enough for rural workers yet robust enough for medical relevance.

Managing data variability in audio samples and building reliable heuristics with limited neonatal datasets.

What we're proud of

Built a fully functional multimodal AI prototype in under 40 hours.

Integrated image, audio, and questionnaire analysis into a single, intuitive dashboard.

Developed a color-detection algorithm that interprets skin hues as medical indicators in real time.

Created a tool that can empower caregivers in low-resource settings and potentially save lives.

What we learned

How computer vision and audio AI can complement each other to assess health non-invasively.

The importance of ethical AI design when working on medical technologies.

How to translate complex medical indicators into simple, user-friendly outputs that anyone can understand.

That innovation doesn’t always need expensive hardware—accessibility matters more.

What's next for SEOS

Integrate deep learning models for more accurate color and cry classification.

Partner with medical researchers and hospitals for validation and field testing.

Develop an offline mobile version for community health workers in rural areas.

Add a cloud-based neonatal dashboard for aggregated screening and follow-up.

Expand SEOS into a broader infant health monitoring platform—bridging technology and care for every newborn.

Built With

Streamlit, OpenCV, scikit-learn, librosa, Python
