NaviGoose is a wearable device with a custom-trained vision model that helps visually impaired users navigate walking spaces in real time 🪿
Inspiration
One in five Canadians aged 65-75 will experience vision loss due to cataracts (Cleveland Clinic). We are inspired by the desire to help elderly individuals with partial vision loss, such as those affected by cataracts and glaucoma, regain more freedom outside.
What it does
We understand the constant need to improve accessibility and want to make a difference, so we built a device that attaches easily to your body. It detects potential hazards such as:
- braille blocks (tactile paving)
- sidewalk curbs
- puddles
- potholes, etc.
and promptly warns the user over text-to-speech (TTS), identifying the hazard and its relative location.
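The "relative location" part of a warning can be derived directly from where the detection's bounding box sits in the frame. The sketch below is illustrative, not our exact code: the function name `describe_detection` and the thirds-of-the-frame thresholds are assumptions.

```python
def describe_detection(label: str, x_center: float, frame_width: float) -> str:
    """Turn one detection into a spoken warning with a coarse relative location.

    x_center is the horizontal centre of the bounding box in pixels.
    Splitting the frame into left / ahead / right thirds is an assumed
    heuristic, not a measured calibration.
    """
    third = frame_width / 3
    if x_center < third:
        position = "on your left"
    elif x_center < 2 * third:
        position = "ahead"
    else:
        position = "on your right"
    return f"Caution: {label} {position}"
```

The resulting string is what gets handed to the TTS engine, e.g. `describe_detection("pothole", 100, 640)` yields "Caution: pothole on your left".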
How we built it
- Raspberry Pi Zero W with a 5-megapixel camera module, streaming the video feed to a laptop over TCP
- Roboflow to annotate and gather datasets
- Trained a custom model on 4000+ images through YOLOv11 for real-time object detection
- Rotating servo motor to increase the field of view
- Whisper TTS
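Streaming JPEG frames from the Pi to the laptop over TCP needs some framing so the receiver knows where one image ends and the next begins. A minimal sketch of one common pattern, length-prefixing each frame with a 4-byte header; the wire format and function names are assumptions, not our exact protocol:

```python
import socket
import struct

def send_frame(sock: socket.socket, jpeg_bytes: bytes) -> None:
    # Prefix each JPEG with its length (big-endian uint32) so the
    # receiver can delimit frames on the byte stream.
    sock.sendall(struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes)

def recv_frame(sock: socket.socket) -> bytes:
    # Read the 4-byte length header, then exactly that many payload bytes.
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    # TCP recv() may return partial data; loop until n bytes arrive.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf
```

On the laptop side, each received frame can be decoded (e.g. with OpenCV) and passed straight to the detector.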
Challenges we ran into
- Compute requirements of training a model on a laptop (we had to rent a cloud GPU on Lambda Labs)
Accomplishments that we're proud of
- Finishing a hardware hack with limited resources!
What we learned
- Model training
- Streaming video feed
What's next for NaviGoose
- Train the model to detect and identify more types of obstacles (maybe pathfinding?)
- Fully implement our idea of increasing the field of view of the camera via a servo
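For the servo idea, the panning logic itself is simple to prototype independently of the hardware: sweep the camera back and forth across a fixed arc. The generator below is a hedged sketch; the angle range, step size, and the name `sweep_angles` are all illustrative assumptions, and on the Pi each yielded angle would be written to the servo (e.g. via gpiozero) with a short dwell between steps.

```python
def sweep_angles(min_angle: int = -60, max_angle: int = 60, step: int = 15):
    """Yield an endless back-and-forth sequence of pan angles for the servo.

    Default bounds and step are placeholder values, not tuned settings.
    """
    angle = min_angle
    direction = step
    while True:
        yield angle
        # Reverse direction at either end of the arc.
        if not (min_angle <= angle + direction <= max_angle):
            direction = -direction
        angle += direction
```

For example, `sweep_angles(-30, 30, 30)` produces -30, 0, 30, 0, -30, 0, … so the camera oscillates across the arc.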
Why Navi"Goose"?
- We started out trying to warn users of nearby goose poop, but realized the idea could be expanded to much more
- Later in the process, we were inspired by the broader goal of increasing accessibility
Built With
- lambdalabs
- nextjs
- python
- pytorch
- raspberry-pi
- react
- roboflow
- yolo