With the rise of remote work and digital lifestyles, long hours in front of screens have become the norm. Many people (ourselves included) unknowingly adopt poor sitting posture and forget to take screen breaks — both of which can lead to physical and mental fatigue over time. We wanted to build a simple, accessible, and smart solution that encourages healthier screen habits — without needing expensive hardware or wearables.
This project was inspired by ergonomic tools in office spaces, mindfulness and wellness reminders in productivity apps, and the idea that posture and presence can be monitored using just a webcam and AI.
How We Built It

Technologies Used:
- p5.js – real-time webcam input and canvas rendering
- ml5.js + PoseNet – machine-learning detection of eye and shoulder positions
- Vanilla JS/HTML/CSS – interface logic, customization, and deployment
- Web Speech API – voice responses for emergency prompts
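To make the stack concrete, here is a minimal sketch of how the pieces above fit together. The browser wiring (shown in comments) uses the standard ml5.js/p5.js pattern; the helper function, variable names, and confidence threshold are our own illustration, not the project's exact source.

```javascript
// Browser side (p5.js + ml5.js), sketched in comments since it needs a webcam:
//
//   let video, pose;
//   function setup() {
//     createCanvas(640, 480);
//     video = createCapture(VIDEO);
//     const poseNet = ml5.poseNet(video, () => console.log('model ready'));
//     poseNet.on('pose', (results) => {
//       if (results.length) pose = results[0].pose;
//     });
//   }
//
// PoseNet reports 17 named keypoints per pose. A small pure helper can
// pull out just the eyes and shoulders, dropping low-confidence points:
function getKeypoints(pose, minConfidence = 0.5) {
  const wanted = ['leftEye', 'rightEye', 'leftShoulder', 'rightShoulder'];
  const out = {};
  for (const kp of pose.keypoints) {
    if (wanted.includes(kp.part) && kp.score >= minConfidence) {
      out[kp.part] = kp.position; // { x, y } in canvas pixels
    }
  }
  return out;
}

// A mock pose result in PoseNet's output shape, for illustration:
const mockPose = {
  keypoints: [
    { part: 'leftEye',       score: 0.9, position: { x: 300, y: 120 } },
    { part: 'rightEye',      score: 0.9, position: { x: 340, y: 122 } },
    { part: 'nose',          score: 0.9, position: { x: 320, y: 140 } },
    { part: 'leftShoulder',  score: 0.8, position: { x: 260, y: 260 } },
    { part: 'rightShoulder', score: 0.3, position: { x: 380, y: 265 } }, // low score: dropped
  ],
};
const pts = getKeypoints(mockPose);
```

Keeping the extraction step pure like this makes the downstream posture logic easy to test without a webcam.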
Key Features:
- Bad Posture Detection – monitors eye and shoulder alignment to detect slouching or leaning, and alerts the user with a screen blur and a sound
- Staring Alert – warns the user when they have been staring at the screen too long without moving
- Fall Detection – if no person is detected on screen for more than 30 seconds (customizable), the system prompts a voice-based check-in and, if the user says "yes", initiates a call to emergency services (simulated via tel:911)
- Custom Staring Time – users configure how long they may stare before a reminder kicks in
- Bad Posture Count – tracks how many times the user slipped into bad posture during a session
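The posture check in the first feature can be sketched as follows. Slouching shrinks the vertical gap between the eyes and the shoulders, so one simple approach (the function names, sample coordinates, and 0.8 ratio here are illustrative assumptions, not the project's exact thresholds) is to compare the current gap against a calibrated upright baseline:

```javascript
// Average eye-to-shoulder vertical distance; larger = more upright
// (canvas y grows downward, so shoulders have the larger y value).
function eyeShoulderGap(pts) {
  const eyeY = (pts.leftEye.y + pts.rightEye.y) / 2;
  const shoulderY = (pts.leftShoulder.y + pts.rightShoulder.y) / 2;
  return shoulderY - eyeY;
}

// Flag bad posture when the gap drops below a fraction of the baseline.
function isSlouching(currentGap, baselineGap, ratio = 0.8) {
  return currentGap < baselineGap * ratio;
}

// Two illustrative frames: upright calibration vs. a slouched pose
// (eyes have dropped ~60 px while the shoulders barely moved).
const upright  = { leftEye: { x: 300, y: 120 }, rightEye: { x: 340, y: 122 },
                   leftShoulder: { x: 260, y: 261 }, rightShoulder: { x: 380, y: 265 } };
const slouched = { leftEye: { x: 300, y: 180 }, rightEye: { x: 340, y: 182 },
                   leftShoulder: { x: 260, y: 262 }, rightShoulder: { x: 380, y: 266 } };

const baseline = eyeShoulderGap(upright);
```

When `isSlouching` returns true, the app would apply the screen blur, play the sound, and increment the bad-posture counter.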
What We Learned:
- How to use PoseNet via ml5.js for practical pose detection from 2D webcam input alone
- How to combine real-time body keypoints to infer higher-level behaviors like "staring" and "slouching"
- How to work with the Web Speech API for interactive voice-controlled prompts
- How to handle asynchronous behavior (timeouts, event listeners) in a responsive user experience
- How to design for accessibility and minimalism using only vanilla technologies — no frameworks like Bootstrap or jQuery
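Inferring a high-level behavior like "staring" from raw keypoints can be done with a small timer: if the eye keypoint stays within a small radius for longer than the configured limit, raise the alert. This is one possible sketch (the function name, the 15 px radius, and the 5 s demo limit are our assumptions); passing timestamps in explicitly keeps the asynchronous logic testable:

```javascript
// Returns an update(eye, now) function that reports whether the user
// has been "staring" (eyes nearly motionless) for longer than limitMs.
function makeStareDetector(limitMs, radiusPx = 15) {
  let anchor = null;     // eye position when the still period began
  let anchorTime = null; // timestamp (ms) when that period began
  return function update(eye, now) {
    const moved = anchor &&
      Math.hypot(eye.x - anchor.x, eye.y - anchor.y) > radiusPx;
    if (!anchor || moved) {          // any real movement resets the timer
      anchor = { x: eye.x, y: eye.y };
      anchorTime = now;
      return false;
    }
    return now - anchorTime >= limitMs; // still too long → staring alert
  };
}

const detect = makeStareDetector(5000);                 // 5 s limit for the demo
detect({ x: 320, y: 120 }, 0);                          // still period begins
const stillTooLong = detect({ x: 322, y: 121 }, 6000);  // barely moved for 6 s
detect({ x: 400, y: 120 }, 7000);                       // large move resets timer
const afterMove = detect({ x: 401, y: 121 }, 8000);     // only 1 s still so far
```

In the real app, `now` would come from `Date.now()` on each PoseNet callback, and the user-configured staring time would replace the 5 s demo value.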
Challenges Faced:
- Accuracy of Pose Estimation: PoseNet sometimes struggles with lighting or partial occlusion, especially for shoulder detection. We had to set reasonable thresholds to balance sensitivity against false positives.
- Fall Detection Logic: distinguishing actual absence from temporary occlusion (e.g., leaning out of frame) required thoughtful timing and cooldowns.
- Voice Recognition Reliability: the Speech API occasionally misfires, especially in noisy environments, so we built in error handling and fallback logic.
- Syncing Visual Feedback with Alerts: timing the screen blur, sound alerts, and posture counting needed careful state management to avoid flooding the user.
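The absence-vs-occlusion distinction above can be handled with a small state machine. In this sketch, a short grace period absorbs momentary detection dropouts, and the voice check-in only fires after a full 30 s of absence; the 30 s default comes from the write-up, while the function names, the 2 s grace period, and the string states are our own illustration:

```javascript
// Returns an update(personDetected, now) function that classifies each
// frame as 'present', 'absent' (brief), or 'check-in' (trigger voice prompt).
function makeAbsenceMonitor({ absenceMs = 30000, graceMs = 2000 } = {}) {
  let lastSeen = 0;    // timestamp of the last confirmed detection
  let gapStart = null; // when the current no-detection gap began
  return function update(personDetected, now) {
    if (personDetected) {
      lastSeen = now;
      gapStart = null;
      return 'present';
    }
    if (gapStart === null) gapStart = now;
    if (now - gapStart < graceMs) return 'present'; // ignore brief flicker
    return now - lastSeen >= absenceMs ? 'check-in' : 'absent';
  };
}

// In the browser, a 'check-in' result would drive the Web Speech API:
//   speechSynthesis.speak(new SpeechSynthesisUtterance('Are you okay?'));
// ...then listen via SpeechRecognition, and a "yes" transcript would
// navigate to tel:911 (simulated, as described above).
```

Keeping the state machine pure (timestamps passed in, no `setTimeout` inside) made it possible to reason about the cooldown behavior without a camera attached.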
ReAlignX is more than a tech demo — it's a personal health companion for the digital age. This project made us realize how creatively machine learning in the browser can be applied to real-world problems. We'd love to keep expanding it with features like:
- Logging posture stats over time
- Gentle screen dimming instead of blur
- Integration with productivity tools (like Google Calendar or Pomodoro timers)