Built with

  • C++17, CMake
  • SmartSpectra / Presage Physiology SDK (REST integration)
  • OpenCV (video capture + GUI)
  • V4L2 (camera interface in WSL)
  • WSL2 + WSLg + usbipd-win (Windows webcam passthrough)
  • Python 3 (heartbeat-to-music MIDI generator)
  • MIDI tooling (optional: FluidSynth + SoundFont for WAV rendering)

Inspiration

We wanted to turn real-time physiology into something people can feel immediately. Seeing a live pulse is cool; hearing your heartbeat shape music is unforgettable. MuSage started as a simple “Hello Vitals” demo and evolved into a biofeedback experience that blends health signals with sound.

What it does

MuSage captures a live camera feed, extracts pulse and breathing metrics, overlays vitals on the video stream, and can generate music driven by heartbeat data. The goal is a responsive loop: see your vitals, hear your rhythm, and understand how your body changes in real time.
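The feedback loop above can be sketched as a simple mapping from vitals to musical parameters. This is a toy illustration, not MuSage's actual mapping: the function name, the pentatonic scale, and the breathing-rate-to-note rule are all assumptions made for the example.

```python
# Illustrative vitals-to-music mapping (assumed names and scale, not the real app):
# heart rate drives tempo, breathing rate picks a note from a pentatonic scale.

PENTATONIC = [60, 62, 64, 67, 69]  # C major pentatonic, as MIDI note numbers

def vitals_to_music(heart_bpm: float, breaths_per_min: float):
    # MIDI tempo is expressed as microseconds per quarter note.
    tempo_us_per_beat = int(60_000_000 / heart_bpm)
    # Cycle the breathing rate through the scale to choose a pitch.
    note = PENTATONIC[int(breaths_per_min) % len(PENTATONIC)]
    return tempo_us_per_beat, note

print(vitals_to_music(72, 12))  # → (833333, 64)
```

A resting pulse of 72 bpm maps to a relaxed tempo, while a racing pulse speeds the music up, which is the "hear your rhythm" effect described above.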

How we built it

  • Integrated the SmartSpectra SDK and Presage Physiology REST API into a C++ app.
  • Built a robust OpenCV video pipeline that streams continuously in WSL.
  • Added a Python script that converts heartbeat samples into multi-track MIDI music.
  • Tuned camera settings and formats to keep the stream stable (resolution/codec negotiation, V4L2 parameters).
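The heartbeat-to-MIDI step can be sketched with nothing but the standard library, since a Standard MIDI File is just a header chunk plus track chunks. The function name, the pitch mapping (clamping BPM into a playable note range), and the one-beat note length are assumptions for illustration; the project's actual generator is multi-track and more elaborate.

```python
# Minimal single-track heartbeat-to-MIDI sketch (illustrative, stdlib only).
import struct

def vlq(n: int) -> bytes:
    """Encode an integer as a MIDI variable-length quantity."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    return bytes(reversed(out))

def heartbeats_to_midi(bpm_samples, path, ticks_per_beat=480):
    """Write a format-0 MIDI file: each heart-rate sample becomes one note."""
    events = bytearray()
    # Set the tempo from the mean heart rate so playback tracks the pulse.
    mean_bpm = sum(bpm_samples) / len(bpm_samples)
    usec_per_quarter = int(60_000_000 / mean_bpm)
    events += vlq(0) + bytes([0xFF, 0x51, 0x03]) + usec_per_quarter.to_bytes(3, "big")
    for bpm in bpm_samples:
        note = min(96, max(48, int(bpm)))  # clamp BPM into a playable pitch range
        events += vlq(0) + bytes([0x90, note, 100])             # note on, channel 0
        events += vlq(ticks_per_beat) + bytes([0x80, note, 0])  # note off, one beat later
    events += vlq(0) + bytes([0xFF, 0x2F, 0x00])  # end-of-track meta event
    with open(path, "wb") as f:
        f.write(b"MThd" + struct.pack(">IHHH", 6, 0, 1, ticks_per_beat))
        f.write(b"MTrk" + struct.pack(">I", len(events)) + bytes(events))
```

The resulting `.mid` file can then be rendered to WAV with FluidSynth and a SoundFont, as noted in the tooling list.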

Challenges we ran into

  • WSL webcam passthrough was fragile: USB attach, device locks, and missing drivers were frequent blockers.
  • MJPEG streams occasionally produced corrupt frames in WSL; format selection and resolution constraints mattered.
  • GUI threading with OpenCV in WSL required careful handling to avoid black frames or freezes.
  • Keeping the stream alive while the subject stays motionless required tuning and defensive handling.
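The defensive handling mentioned above mostly comes down to never trusting a single read. Here is a sketch of that pattern with a stubbed capture object; `FlakyCapture` is an invented stand-in for `cv2.VideoCapture` (which drops and truncates frames under WSL), and `read_valid_frame` is an illustrative name, not the project's API.

```python
# Defensive frame reading: retry, and reject empty or truncated frames
# (truncation is how corrupt MJPEG frames typically showed up).

def read_valid_frame(cap, retries=5, min_bytes=1024):
    """Retry reads and reject missing or suspiciously small frames."""
    for _ in range(retries):
        ok, frame = cap.read()
        if ok and frame is not None and len(frame) >= min_bytes:
            return frame
    raise RuntimeError(f"no valid frame after {retries} attempts")

class FlakyCapture:
    """Invented stand-in for a camera that drops or truncates frames."""
    def __init__(self, outputs):
        self._outputs = iter(outputs)
    def read(self):
        frame = next(self._outputs, None)
        return (frame is not None, frame)

# One dropped frame, one truncated frame, then a good one.
cap = FlakyCapture([None, b"\x00" * 10, b"\xff" * 2048])
frame = read_valid_frame(cap)  # skips the bad reads, returns the 2048-byte frame
```

The same wrapper is where reconnection logic would go in the real pipeline, so a transient USB hiccup degrades to a skipped frame instead of a frozen window.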

Accomplishments that we're proud of

  • Stable real-time vitals overlay in a WSL environment.
  • Reliable camera streaming with explicit device settings and robust handling.
  • A heartbeat-to-music pipeline that turns biometric data into coherent MIDI tracks.

What we learned

  • Camera formats and driver behavior matter as much as application logic.
  • Real-time media pipelines need defensive engineering (threading, format conversion, frame validation).
  • WSL is powerful, but hardware I/O requires extra care and clear diagnostics.
  • Small UX touches (like overlays and audio feedback) make physiological data far more engaging.

What's next for MuSage

  • Real-time audio output (not just MIDI) with timbre that adapts to stress/relaxation states.
  • Better camera/format auto-detection across platforms.
  • A polished UI and presets for different music styles.
  • Packaging for native Windows and Linux to reduce setup friction.
