Inspiration
We wanted a way to control our mouse quickly, without having to reach for it. The goal was to get much of the functionality of a touch screen, but without touching the screen, in a way that also works on desktops.
What it does
By simply raising your hand so that it is visible to your webcam, you can move the mouse cursor around the screen, giving you basic control over the mouse with your hands. You can also left click by raising both hands at the same time, which starts clicking at a rate of around 3 to 4 clicks per second.
How we built it
We built the project around Azure's Custom Vision service, which allowed us to train a model capable of detecting hands in a webcam feed. We trained the model on images we collected with a Python script, which let us gather a large number of pictures quickly. In Python, we used the trained model through Microsoft's "azure-cognitiveservices-vision-customvision" library, sending frames captured from the webcam to get predictions. Lastly, by finding the ratio between the program's dimensions and the user's screen dimensions, we were able to move the mouse relative to the position of the hand.
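The ratio step above can be sketched as a small function. Custom Vision returns bounding boxes in normalized [0, 1] coordinates, so scaling the box center by the screen size gives a cursor position (the function and variable names here are illustrative, not our exact code):

```python
def hand_to_screen(box_left, box_top, box_width, box_height,
                   screen_width, screen_height):
    """Map the center of a normalized bounding box to screen pixels."""
    center_x = box_left + box_width / 2
    center_y = box_top + box_height / 2
    return int(center_x * screen_width), int(center_y * screen_height)

# Example: a hand detected in the middle-right of the frame,
# on a 1920x1080 screen.
x, y = hand_to_screen(0.70, 0.40, 0.10, 0.20, 1920, 1080)  # -> (1440, 540)
```

The resulting (x, y) can then be passed to a mouse-control call such as pyautogui.moveTo(x, y) each frame.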
Challenges we ran into
We did not know how to integrate our Azure Custom Vision model into our Python program, but after some research we figured out how to call it from Python.
When adding a left-click feature, we initially wanted to use an Arduino that would send a string over the serial port to trigger a click, but we ran into many issues, so we decided that the user can simply raise both hands at once to start clicking repeatedly.
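The both-hands click could be gated by a simple rate limiter so it fires at roughly 3 to 4 clicks per second. A minimal sketch, assuming the main loop knows how many hands the model detected in the current frame (the helper name and interval are assumptions, not our exact code):

```python
CLICK_INTERVAL = 0.28  # seconds between clicks, roughly 3-4 clicks per second

def should_click(hands_detected, last_click_time, now):
    """Return True when both hands are up and enough time has passed
    since the previous click."""
    return hands_detected >= 2 and (now - last_click_time) >= CLICK_INTERVAL
```

In the main loop, when this returns True, the program would call something like pyautogui.click() and record the current time as last_click_time.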
When using the program it is hard to be precise, and it is impossible to reach the edges of the screen, so we tried adding a way for the user to map which area of their webcam feed controls the mouse, but unfortunately we were not able to get that to work.
Accomplishments that we are proud of
We like that we were able to detect hands in our program using the Azure Custom Vision service. Before Hack the North, none of us had ever used that service, so it was a new concept for us.
We were also proud that we built a somewhat functional mouse controlled by the user's hands in the air.
What we learned
How to use Azure's Custom Vision service to create an object detection model
How to integrate that model into a Python program
What's next for Magic Mouse
In the future, Magic Mouse could include other features that make controlling the mouse easier, such as detecting specific hand gestures. It should also have a way of keeping mouse control precise; tracking one specific part of the hand might help with that. Lastly, optimizing the program to run faster is another way to improve its quality.
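One common way to improve precision (an assumption about a possible fix, not something we implemented) is to exponentially smooth the detected hand position, which damps frame-to-frame jitter from the detector:

```python
def smooth(prev_x, prev_y, new_x, new_y, alpha=0.3):
    """Blend the newest detection with the previous cursor position.
    Lower alpha means a smoother but laggier cursor."""
    return (prev_x + alpha * (new_x - prev_x),
            prev_y + alpha * (new_y - prev_y))

# Each frame, the smoothed position (rather than the raw detection)
# would be used to move the cursor.
cursor = smooth(100.0, 100.0, 200.0, 200.0, alpha=0.5)  # -> (150.0, 150.0)
```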
