Inspiration

The idea for SpectralFocus Teacher came from a simple frustration: most students genuinely want to stay focused, but existing study-assistant apps either distract them with too many features or compromise privacy by streaming data to the cloud. We wanted something quiet, local, and helpful — a tool that passively supports focus without ever feeling intrusive. The concept of a “pocket shadow teacher” who observes only enough to help, not judge, inspired us to build an on-device attention and posture coach powered by lightweight computer vision.

What it does

SpectralFocus Teacher uses the phone’s front camera and on-device ML to detect posture changes, eye closure, head angle drift, and general distraction patterns. It converts these signals into a per-minute Focus Score, logs distraction events, and surfaces short, actionable feedback like “Lift your chin,” “Eyes drifting,” or “Tiny break?” All processing happens locally on the device — no images are saved, uploaded, or stored. It’s designed to be a quiet companion that improves focus through micro-interventions rather than long chats.
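The per-minute Focus Score described above can be sketched as a simple aggregation of the minute's distraction signals. This is a minimal illustration, not the app's actual formula — the signal names and penalty weights here are assumptions:

```kotlin
// Hypothetical sketch: fold one minute of distraction signals into a 0-100 score.
// The weights (60, 30, 5) are illustrative, not the app's tuned values.
data class MinuteSignals(
    val eyesClosedRatio: Double,  // fraction of sampled frames with eyes closed
    val headDriftRatio: Double,   // fraction of frames with head angle past threshold
    val distractionEvents: Int    // discrete distraction events logged this minute
)

fun focusScore(s: MinuteSignals): Int {
    val penalty = 60.0 * s.eyesClosedRatio +
                  30.0 * s.headDriftRatio +
                  5.0 * s.distractionEvents
    return (100.0 - penalty).coerceIn(0.0, 100.0).toInt()
}
```

Keeping the score a pure function of numeric per-minute signals is also what makes the privacy promise easy to uphold: only these numbers ever need to be stored.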

How we built it

We built an Android prototype using CameraX for real-time camera frames and ML Kit Face Mesh for lightweight facial-landmark detection. From these landmarks, we applied heuristic math:

  • Eye aspect ratio: ( E = \frac{h_{\text{eye}}}{w_{\text{eye}}} )
  • Head pitch and yaw derived from keypoint vectors
  • Face-box area as a proxy for distance and slouching

These signals were aggregated into a scoring formula that updates every minute. A local rules engine triggers feedback when thresholds are crossed. No cloud API is involved; all inference runs on ARM hardware. We wrapped the system in a simple UI showing score history, session duration, and interventions.
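The eye-aspect-ratio heuristic and the rules engine above can be sketched as follows. This assumes landmark points are already extracted as (x, y) pairs; the threshold values (0.18 for eye closure, -15° for pitch) are illustrative, not our tuned constants:

```kotlin
// Assumed landmark representation: normalized (x, y) coordinates.
data class Point(val x: Double, val y: Double)

// E = h_eye / w_eye: vertical eye opening over horizontal eye width.
// A small E over several frames suggests the eyes are closing.
fun eyeAspectRatio(top: Point, bottom: Point, left: Point, right: Point): Double {
    val h = kotlin.math.abs(bottom.y - top.y)
    val w = kotlin.math.abs(right.x - left.x)
    return if (w > 0.0) h / w else 0.0
}

// Minimal rules engine: map a signal crossing its threshold to a short cue.
// Thresholds here are placeholders for the tuned values.
fun cueFor(eyeAspect: Double, pitchDeg: Double): String? = when {
    eyeAspect < 0.18 -> "Eyes drifting"
    pitchDeg < -15.0 -> "Lift your chin"
    else -> null
}
```

In practice such checks would run on smoothed, multi-frame averages rather than single frames, since raw per-frame ratios are noisy.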

Challenges we ran into

The biggest challenge was balancing accuracy with performance. Heavy models drain battery quickly, while too-simple heuristics produce noisy results. We fine-tuned thresholds, downsampled frames, and optimized detection frequency to maintain both speed and reliability. Another challenge was ensuring privacy. We removed all frame capture, recording, or logs containing raw images. The app only stores numerical signals. Creating meaningful feedback using very small data points was also tricky — we had to design cues that were helpful but not annoying.
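One of the optimizations mentioned above, reducing detection frequency, can be sketched as a simple frame throttler: run landmark inference only on every N-th camera frame instead of all of them. The class and parameter names here are hypothetical:

```kotlin
// Sketch of detection-frequency throttling: with frames arriving at ~30 fps,
// processing only every N-th frame cuts inference cost roughly N-fold.
class FrameThrottler(private val everyNth: Int) {
    private var count = 0

    // Returns true only once per everyNth calls.
    fun shouldProcess(): Boolean {
        count += 1
        if (count >= everyNth) {
            count = 0
            return true
        }
        return false
    }
}
```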

Accomplishments that we're proud of

We’re proud that the entire system runs 100% locally without sending a single byte to the cloud. Achieving real-time face-landmark tracking with smooth performance on low-end devices was a major win. We’re also happy with how “human” the feedback feels even though it’s generated from simple heuristics rather than a large model.

What we learned

We learned how powerful lightweight machine-vision models can be when combined with good heuristics. More importantly, we learned the value of designing for behavior change, not just AI output. Subtle interventions matter more than complex features.

What's next for SpectralFocus Teacher

We want to add a calibration flow, session analytics, streaks, ambient-sound detection, and a fully offline LLM for deeper guidance. Our long-term goal is to make SpectralFocus Teacher a personal, private, everyday study companion.

Built With

  • android
  • arm neural acceleration
  • camerax
  • google ml kit face mesh
  • heuristic analysis
  • jetpack compose
  • kotlin
  • landmark detection
  • local storage
  • on-device
  • tensorflow lite
  • ui
  • xml