Inspiration
Our inspiration for LightSight stemmed from the desire to enhance the lives of visually impaired individuals by leveraging computer vision and tactile feedback to assist with navigation and object identification. As a team, we believe that everyone should be able to enjoy a high quality of life. We created LightSight to be a valuable addition to someone's daily life rather than a cumbersome alternative to more traditional approaches.
What it does
LightSight combines real-time object recognition with audio descriptions and vibrations that adapt to proximity, helping visually impaired users build a more detailed picture of their surroundings. Users simply press a button to hear object labels and sense nearby obstacles through touch.
How we built it
We built LightSight using computer vision algorithms for object recognition, a text-to-speech engine for audio feedback, a lidar sensor for distance detection, and vibration motors for tactile feedback. The system was developed through a combination of machine learning techniques and hardware integration. A TensorFlow model optimized with TensorRT and running on an NVIDIA Jetson Nano provided the compute power needed for real-time object detection. A commercially available lidar sensor supplied more accurate distance measurements than traditional ultrasonic sensors, enabling more precise navigation.
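To illustrate how these pieces fit together, here is a minimal sketch of the perception loop in Python. The detect_objects() helper is a stub standing in for the TensorRT-optimized detector on the Jetson Nano, and the camera grab is a placeholder; only the pyttsx3 text-to-speech calls reflect a real off-the-shelf library, and the structure is an assumption rather than our exact code.

```python
# Sketch of the detection-to-speech pipeline (detect_objects() is a stub for the
# TensorRT-accelerated model; pyttsx3 provides off-the-shelf text-to-speech).
import time
import pyttsx3

tts = pyttsx3.init()


def detect_objects(frame):
    # Stub: the real detector runs a TensorRT-optimized TensorFlow model on the
    # Jetson Nano and returns labels for objects found in the camera frame.
    return ["chair", "door"]


def announce(labels):
    """Read the detected object labels aloud through the text-to-speech engine."""
    tts.say(", ".join(labels))
    tts.runAndWait()


def main():
    while True:
        frame = None                     # placeholder for a camera frame grab
        labels = detect_objects(frame)
        if labels:
            announce(labels)
        time.sleep(1.0)                  # poll roughly once per second in this sketch


if __name__ == "__main__":
    main()
```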
Challenges we ran into
We faced challenges optimizing the response time for real-time feedback and synchronizing the vibrations with the lidar sensor readings. Ensuring a seamless user experience posed its own set of challenges: integrating the real-time audio feedback with the push button seemed simple but turned out to be one of the more challenging aspects of the project. Working with microcontrollers and soldering the circuit boards needed to fit everything into a portable package also proved extremely challenging.
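The sketch below shows one way the lidar-to-vibration synchronization and the push-button audio trigger can be handled in Python; the smoothing factor, range, and debounce interval are illustrative assumptions, not our actual values.

```python
# Smooth noisy lidar readings before mapping them to vibration intensity, and
# debounce the push button so one press triggers exactly one audio readout.
# Constants here are illustrative assumptions.
import time


class VibrationMapper:
    """Convert lidar distances into a 0-255 motor duty cycle: closer = stronger."""

    def __init__(self, max_range_cm=400.0, alpha=0.3):
        self.max_range_cm = max_range_cm
        self.alpha = alpha               # exponential smoothing factor
        self.smoothed = max_range_cm

    def update(self, distance_cm):
        # Exponential moving average keeps the motor from jittering on noisy reads.
        self.smoothed = self.alpha * distance_cm + (1 - self.alpha) * self.smoothed
        clamped = max(0.0, min(self.smoothed, self.max_range_cm))
        return int(255 * (1.0 - clamped / self.max_range_cm))


class DebouncedButton:
    """Ignore contact bounce so a held or noisy button fires only once per press."""

    def __init__(self, hold_s=0.25):
        self.hold_s = hold_s
        self.last_press = 0.0

    def pressed(self, raw_state):
        now = time.monotonic()
        if raw_state and (now - self.last_press) > self.hold_s:
            self.last_press = now
            return True
        return False


if __name__ == "__main__":
    mapper = VibrationMapper()
    for reading in (350, 340, 120, 115, 110):   # simulated lidar readings in cm
        print(f"{reading} cm -> {mapper.update(reading)}/255 duty cycle")
```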
Accomplishments that we're proud of
We are very proud of our implementation of real-time object detection, as it is often a challenging feature to implement even with powerful and optimized hardware. Additionally, we are proud of successfully integrating both the lidar and vision systems into a single intuitive package.
Moreover, given our different backgrounds (one of us is a Computer Science major and the other is an Aerospace Science and Engineering major), we are proud to have come together, used our skills to complement each other, and worked toward our collective goal for the project.
What we learned
Apart from gaining an appreciation for the struggles faced by the visually impaired, we also learned a lot from the technical hurdles we overcame. The hardware side was complex: each subsystem is intricate on its own, and even more so when combined into a single usable device. Finding a power supply that could portably power the Jetson Nano was particularly challenging, as the board draws a significant amount of energy under load.
What's next for LightSight
Moving forward, we aim to train our machine learning model to recognize a wider variety of objects and to further improve LightSight's speed and accuracy. We also want to move LightSight onto more custom hardware to reduce the size and weight of the final package. Finally, we plan to incorporate a more robust 3D lidar system to give visually impaired users greater situational awareness.
Built With
- 3d-printing
- arduino-uno
- jetson-nano
- lidar
- python
- tensorflow
- tensorrt


