Inspiration

In classrooms and creative spaces, collaboration often slows down because traditional screens and projectors limit how people interact with content. Inspired by the need for more natural, intuitive communication, we created a gesture-controlled glove. With a simple wave, point, or pinch, users can manipulate what’s on the screen, turning ideas into actions instantly. Think of how professors often lecture passionately but have to walk back to their computer just to switch a tab or start a demo. In those few seconds, students pull out their phones and get distracted, and the class loses momentum. Technology like ScreenWave makes sharing knowledge as seamless as moving your hand.

Implementation

The glove allows the user to rotate their hand to move their cursor, and tap their thumb and pointer finger together to perform a click.

To implement our project, we used accelerometers, magnetometers, a button, and some Arduinos. We mounted three accelerometer/magnetometer sensors on the glove to gather data, and used Python to translate relative hand position into mouse movement on the computer screen. The button lets the user turn glove control on and off, so they can make hand gestures without disturbing what is on their screen.
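
As a rough illustration of the software side, here is a minimal Python sketch of how orientation readings streamed over serial might be mapped to cursor movement with pyautogui. The port name, the "pitch,roll,button" message format, the baud rate, and the sensitivity constant are all illustrative assumptions, not our exact code.

```python
# Hypothetical host-side loop: map hand tilt from one glove sensor to cursor movement.
# Port name, "pitch,roll,button" message format, and tuning constants are assumptions.
import serial
import pyautogui

PORT = "/dev/ttyACM0"   # assumed serial port for the Arduino
SENSITIVITY = 8.0       # pixels of cursor movement per degree of tilt (guess)

pyautogui.FAILSAFE = False  # hand motion can legitimately reach screen corners

with serial.Serial(PORT, 115200, timeout=1) as conn:
    enabled = True
    prev_pressed = False
    while True:
        raw = conn.readline().decode(errors="ignore").strip()
        if not raw:
            continue
        try:
            pitch, roll, button = (float(v) for v in raw.split(","))
        except ValueError:
            continue  # skip malformed packets
        pressed = button > 0.5
        if pressed and not prev_pressed:  # toggle glove control on the press edge
            enabled = not enabled
        prev_pressed = pressed
        if enabled:
            # Relative movement: tilting the hand nudges the cursor.
            pyautogui.moveRel(roll * SENSITIVITY, -pitch * SENSITIVITY)
```

Relative (rather than absolute) movement keeps small sensor drift from throwing the cursor across the screen.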

Future Steps

Our next step would be to greatly reduce our hardware overhead; in particular, an I2C multiplexer would quarter the hardware we use. Additionally, we'd like to calibrate the glove for a projector or other large screen to increase the collaborative potential of the device.

Greatest Challenges

One of our greatest challenges was hardware. With limited access to electronics over the short work period, we ended up with a lot of janky workarounds. For example, we could not connect the three accelerometer/magnetometer sensors to a single Arduino, since it only has one I2C port, so we had to use three separate Arduinos and open three separate communication ports on the software side.
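
On the software side, that workaround boils down to opening one serial connection per Arduino and polling each in turn. A minimal sketch of the pattern is below; the port names and the comma-separated "ax,ay,az,mx,my,mz" message format are assumptions, not our exact protocol.

```python
# Hypothetical sketch of the three-Arduino workaround: one serial port per sensor board.
import serial

PORTS = ["/dev/ttyACM0", "/dev/ttyACM1", "/dev/ttyACM2"]  # assumed port names

def read_sample(conn):
    """Read one 'ax,ay,az,mx,my,mz' line and return it as a list of floats."""
    raw = conn.readline().decode(errors="ignore").strip()
    try:
        return [float(v) for v in raw.split(",")]
    except ValueError:
        return None  # malformed or empty line

connections = [serial.Serial(port, 115200, timeout=1) for port in PORTS]
while True:
    samples = [read_sample(conn) for conn in connections]
    if all(samples):
        thumb, index, back_of_hand = samples  # assumed sensor placement
        # ...hand the three readings to the cursor and click logic...
```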

Additionally, we struggled with translating raw accelerometer/magnetometer readings into a cursor position. With only these two kinds of readings available, figuring out how to map specific motions to cursor actions was a hurdle. We tested a couple of different heuristics for detecting when the user brings their thumb and pointer finger together to "click": we first experimented with acceleration thresholds, then switched to calibrating the magnetometer data and measuring the difference between gestures.
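
As a sketch of the heuristic we ended up preferring: record a baseline thumb/index magnetometer difference while the hand is relaxed, then flag a click whenever the live difference drifts past a threshold. The field-magnitude comparison, the calibration routine, and the threshold value below are illustrative assumptions, not our tuned numbers.

```python
# Hypothetical click detector based on the thumb/index magnetometer difference.
import math

CLICK_THRESHOLD = 0.15  # assumed value; tuned by comparing relaxed vs. pinch captures

def magnitude(mx, my, mz):
    """Magnitude of one magnetometer reading."""
    return math.sqrt(mx * mx + my * my + mz * mz)

def calibrate(diff_samples):
    """Average thumb/index field difference recorded while the hand is relaxed."""
    return sum(diff_samples) / len(diff_samples)

def is_click(thumb_field, index_field, baseline):
    """Pinching moves the two sensors together, pushing their field difference
    away from the relaxed-hand baseline by more than the threshold."""
    diff = abs(magnitude(*thumb_field) - magnitude(*index_field))
    return abs(diff - baseline) > CLICK_THRESHOLD
```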
