Inspiration
The daily challenges faced by blind individuals, such as navigating physical obstacles that are often overlooked or difficult to detect, inspired us to create a solution that significantly improves their mobility and safety. From sharp tools to low-hanging objects and vehicles, the dangers are pervasive, both indoors and outdoors. We aimed to bridge the gap in accessibility with an AI-driven solution that offers real-time, intelligent assistance for the visually impaired.
What it does
VisioSense is an AI-powered Blind Assistance System that leverages real-time object detection and wireless feedback to help blind individuals navigate their surroundings safely. Equipped with sensor-embedded glasses and a voice-based interface, the system detects obstacles, calculates their distance, and provides auditory alerts. Whether indoors or outdoors, VisioSense enhances mobility by recognizing small objects, overhanging hazards, and vehicles, enabling users to avoid accidents.
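The core loop described above, turning a detected obstacle into a spoken alert when it comes within range, can be sketched as follows. This is a minimal illustration, not the actual VisioSense code: the `Detection` structure, the labels, and the 2-meter danger radius are all assumed values for the example.

```python
# Hedged sketch: mapping an obstacle detection to an auditory alert message.
# The Detection structure, labels, and danger radius are illustrative
# assumptions, not the real VisioSense implementation.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Detection:
    label: str          # e.g. "car", "pole", "low-hanging branch"
    distance_m: float   # estimated distance to the obstacle in meters


def alert_for(detection: Detection, danger_radius_m: float = 2.0) -> Optional[str]:
    """Return a spoken-alert string if the obstacle is within the danger radius."""
    if detection.distance_m > danger_radius_m:
        return None  # obstacle is far away: stay silent to avoid alert fatigue
    return f"Warning: {detection.label} {detection.distance_m:.1f} meters ahead"


# A nearby car triggers an alert; a distant tree does not.
print(alert_for(Detection("car", 1.4)))   # → Warning: car 1.4 meters ahead
print(alert_for(Detection("tree", 5.0)))  # → None
```

Keeping distant objects silent is a deliberate choice in a sketch like this: for a voice-only interface, announcing everything in view would drown out the hazards that actually matter.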
How we built it
We built the system around the SSD MobileNet model for real-time object detection, integrated with TensorFlow APIs. Sensors embedded in specially designed glasses capture visual data, which is processed together with frames from an Android phone camera to support both indoor and outdoor navigation. Detections are translated into text and then converted into voice messages; the wireless voice-feedback system also calculates object distances and issues immediate alerts when hazards are nearby.
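One common way to calculate object distance from a single camera frame, which a setup like this could use, is the pinhole-camera model: an object of known real-world width appears smaller in pixels the farther away it is. The sketch below assumes hypothetical calibration values (the focal length and the per-label width table are not measured VisioSense constants).

```python
# Hedged sketch: estimating obstacle distance from a detection's bounding-box
# width using the pinhole-camera model. The focal length and known object
# widths are assumed calibration values for illustration only.
KNOWN_WIDTH_M = {
    "car": 1.8,     # assumed average car width in meters
    "person": 0.5,  # assumed shoulder width
    "pole": 0.1,    # assumed pole diameter
}


def estimate_distance_m(label: str, bbox_width_px: float,
                        focal_length_px: float = 800.0) -> float:
    """Pinhole model: distance = (real width * focal length) / pixel width."""
    real_width_m = KNOWN_WIDTH_M[label]
    return real_width_m * focal_length_px / bbox_width_px


# A car whose bounding box spans 400 px, at an assumed 800 px focal length:
print(round(estimate_distance_m("car", 400), 2))  # → 3.6
```

In practice the focal length would be calibrated per phone camera, and thin objects like poles would need the higher-resolution detection the challenges section describes, since a few pixels of bounding-box error translates into a large distance error.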
Challenges we ran into
We faced several challenges, including accurately detecting small, thin objects such as sharp tools and metal poles, which traditional detection methods often miss. Integrating real-time object detection with voice feedback also posed technical difficulties, particularly in ensuring smooth interaction between the hardware sensors and the software processing pipeline.
Accomplishments that we're proud of
We are proud to have successfully created a system that significantly improves the independence and mobility of visually impaired individuals. One key accomplishment was developing the sensor-equipped glasses and fine-tuning the object detection algorithm to identify not just large obstacles but also subtle hazards like overhanging objects and low-lying obstacles, offering a comprehensive solution.
What we learned
Through this project, we learned the intricacies of combining hardware sensors with AI-driven software, especially in the context of real-time object detection. We also gained valuable insights into user interaction, realizing the importance of intuitive, voice-based feedback systems for seamless and non-intrusive navigation.
What's next for Blind Assistance System
Our next steps include refining the accuracy of obstacle detection, expanding the range of hazards it can identify, and improving the user interface for a more personalized experience. We also plan to explore integration with additional mobility tools, like GPS and mapping systems, to offer enhanced outdoor navigation for the visually impaired.