Inspiration

Most fitness apps only count reps or time. They don’t tell you why your squat feels wrong or how to fix it. Personal trainers solve this—but they’re expensive, inaccessible, and not scalable. Oops All Motion brings real-time form correction to anyone, anywhere, using low-cost hardware and on-device AI.

What it does

Oops All Motion is a hardware-powered fitness coaching system that:

  • Tracks exercise movement using an Arduino Uno Q connected to a webcam and running an Edge AI classification model
  • Analyzes exercise form from the webcam feed locally (no cloud, no network latency)
  • Sends real-time feedback and video to a Samsung Galaxy XR headset
  • Displays a digital twin that:
      • Identifies what’s wrong with your form
      • Visually demonstrates the correct movement
      • Helps you fix mistakes while you’re mid-rep

How we built it

Webcam Motion Capture
A standard webcam records the user performing an exercise (a dumbbell curl).
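For reference, the capture step is an ordinary OpenCV loop. A minimal sketch, assuming OpenCV is installed and the webcam is device 0 (the callback is illustrative):

```python
# Minimal webcam capture loop: grab frames and hand each one to a
# classifier callback. Device index 0 is the default webcam.
import cv2

def capture_frames(on_frame, device_index=0):
    cap = cv2.VideoCapture(device_index)
    if not cap.isOpened():
        raise RuntimeError("could not open webcam")
    try:
        while True:
            ok, frame = cap.read()  # BGR frame as a numpy array
            if not ok:
                break
            on_frame(frame)
    finally:
        cap.release()
```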

On-Device Edge AI Classification
The Arduino Uno Q processes the webcam feed with an Edge AI model running directly on the board, so no frames ever leave the device.
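We drove this through the Edge Impulse CLI runtime; the equivalent flow with the Edge Impulse Linux Python SDK looks roughly like this (the .eim model path and the input frame are placeholders):

```python
# Sketch of on-device classification with the Edge Impulse Linux Python
# SDK. "modelfile.eim" is a placeholder for the model exported from
# Edge Impulse for this board.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

with ImageImpulseRunner("modelfile.eim") as runner:
    model_info = runner.init()
    print("labels:", model_info["model_parameters"]["labels"])

    frame = cv2.imread("frame.jpg")               # stand-in for a live webcam frame
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # the SDK expects RGB input
    features, cropped = runner.get_features_from_image(img)
    result = runner.classify(features)

    # Pick the highest-scoring form state
    scores = result["result"]["classification"]
    best = max(scores, key=scores.get)
    print(best, scores[best])
```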

Exercise Classification
The model classifies each dumbbell curl into four discrete form states: idle, back arch, elbow out, and perfect.
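Downstream, each form state only has to map to a coaching cue. A tiny sketch (the cue strings are hypothetical, not the exact ones we shipped):

```python
# Hypothetical mapping from form states to coaching cues.
FORM_FEEDBACK = {
    "idle": None,  # nothing to correct
    "back arch": "Keep your back straight; brace your core.",
    "elbow out": "Tuck your elbow in against your side.",
    "perfect": "Nice rep! Keep that form.",
}

def feedback_for(label):
    """Return the coaching cue for a form state (None when idle)."""
    return FORM_FEEDBACK.get(label)
```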

XR Feedback Loop
Classification results are streamed to a Samsung Galaxy XR headset, where:

  • Visual cues highlight form issues
  • A digital trainer avatar demonstrates the correct movement
  • Users receive immediate, intuitive feedback
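A rough sketch of the streaming side, assuming the websockets package (a recent release where the handler takes a single connection argument) and pymongo for its bson module; the message schema, port, and frame rate are illustrative, not our exact protocol:

```python
# Pack a JPEG frame plus the current form label into one BSON document
# and push it to the headset over a binary WebSocket.
import asyncio
import time

import bson        # ships with the pymongo package
import cv2
import websockets

async def handler(websocket):
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            encoded, jpeg = cv2.imencode(".jpg", frame)
            if not encoded:
                continue
            msg = bson.encode({
                "ts": time.time(),
                "label": "perfect",       # stand-in for the classifier output
                "frame": jpeg.tobytes(),  # JPEG bytes, decoded on the headset
            })
            await websocket.send(msg)     # sent as a binary frame
            await asyncio.sleep(1 / 15)   # ~15 fps budget
    finally:
        cap.release()

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())
```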

Challenges we ran into

We had a lot of false starts on this project. We really needed a pretrained model, but at first we were confused about how to use Edge Impulse for that purpose. We were encouraged to reuse projects already in Edge Impulse, but nothing there was suitable. We tried MediaPipe’s pose estimation solution, but it doesn’t ship aarch64 binaries, so we set it aside in favor of alternatives. We then found a MoveNet model in .onnx format and learned that Edge Impulse has a bring-your-own-model feature, but it errored out while converting the .onnx to .tflite.

From there we worked two tracks in parallel: hunting for a ready-made .tflite and running the .onnx model directly with ONNX Runtime. Both approaches came together at about the same time, and Edge Impulse’s turned out slightly better, so we moved forward with it.

The real trouble throughout was that the Edge Impulse documentation was extremely hard to search; we had no good way to see the possible approaches to our problem within it. Whenever we temporarily stopped using it, we found solutions much faster. In addition, existing alternatives are almost entirely orchestrated in Python, which makes it obvious how to compose them together; Edge Impulse doesn’t have that advantage. That said, once you understand how to use it, it’s quite powerful.

The Edge Impulse documentation was hard to work through, and the Edge Impulse Linux CLI was very restrictive; we ended up downloading the quantized TensorFlow Lite model that Edge Impulse generated and running it directly on the Arduino.
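A minimal sketch of that direct path with tflite_runtime; the model file name is a placeholder, and a real pipeline would resize and quantize a webcam frame rather than feed a dummy tensor:

```python
# Run the quantized .tflite model exported from Edge Impulse directly.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="ei-quantized.tflite")  # placeholder name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Quantized models typically expect int8/uint8 input matching inp["shape"].
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print("per-class scores:", scores)
```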

Accomplishments that we're proud of

We successfully ran the model on the Arduino Uno Q and converted the webcam feed from the Edge Impulse CLI runtime into a BSON format the Galaxy XR could consume. We streamed the camera feed to both the AI model and the Samsung Galaxy XR, providing real-time feedback with low latency. We also 3D-printed a small-form-factor chassis, so the device is fully portable (perfect to bring to a gym).

What we learned

We learned how to use the Edge Impulse ecosystem to build AI models and how to develop and deploy those models to the Arduino Uno Q. We also learned a lot about embedded programming on the Arduino Uno Q, about setting up WebSockets to feed camera data to the Samsung Galaxy XR headset, and about rendering that feed in Unity.

What's next for Oops All Motion

  • Support for additional exercises beyond dumbbell curls (e.g., yoga)
  • Multi-view or multi-sensor tracking
  • Physical therapy and rehabilitation use cases
  • Combining movement-based XR with Flow meditation modes (e.g., MOVE → CALM)

We can make use of the edge-impulse-linux CLI to feed more data into the Edge Impulse project, allowing us to keep training and improving our model. We can also train the model to recognize more exercises; yoga is a promising area, since form is so important there.
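The CLI handles acquisition interactively; for bulk uploads, Edge Impulse’s ingestion API can also be called directly. A sketch under that assumption (API key, label, and file path are placeholders):

```python
# Upload one labeled webcam frame to the Edge Impulse ingestion API.
import os
import requests

API_KEY = "ei_..."             # project API key (placeholder)
LABEL = "elbow out"            # form state this sample demonstrates
PATH = "samples/curl_001.jpg"  # captured webcam frame

res = requests.post(
    "https://ingestion.edgeimpulse.com/api/training/files",
    headers={"x-api-key": API_KEY, "x-label": LABEL},
    files=(("data", (os.path.basename(PATH), open(PATH, "rb"), "image/jpeg")),),
)
res.raise_for_status()
print("uploaded:", res.text)
```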

Potential Impact and Scalability Approach

We can design an even smaller and cheaper device, or move the application onto a smartphone, making the app accessible in many more settings. Our tool already has a small form factor, so the system can scale to home, rehabilitation, and wellness settings, expanding access to safe physical activity through low-cost, on-device AI.
