Inspiration

Navigating the world poses daily challenges for visually impaired individuals. Traditional aids like white canes and guide dogs offer limited range and adaptability. We set out to create a solution that enhances perception in real time using modern sensor technology to make navigation safer, smarter, and more responsive.

What it does

NavAid helps visually impaired individuals navigate their environment more safely and independently by detecting obstacles in real time. The LiDAR sensor continuously scans the user's surroundings for nearby objects, the IMU tracks orientation and motion, and a wide-angle camera adds visual context. When an obstacle comes within a set proximity threshold, the system triggers LED indicators to alert the user to its location and distance.

How we built it

Hardware: Our system relies on three key sensors: a LiDAR unit, a 9-DOF IMU (Inertial Measurement Unit), and a camera with a 220-degree field of view. These components work together to capture a comprehensive view of the user's surroundings. The LiDAR provides precise distance measurements by emitting laser pulses and timing how long they take to reflect back. The IMU combines a gyroscope, accelerometer, and magnetometer, each measuring along three axes, allowing the system to track direction and movement. The wide-angle camera complements these sensors by offering additional contextual awareness when needed. All sensor data is processed locally on a Jetson Nano; running every computation onboard rather than offloading to the cloud keeps latency low.
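As a simple illustration of the time-of-flight principle (not the LiDAR's actual driver code, which ships with the sensor), the round-trip time of a laser pulse maps to distance like this:

```python
# Illustrative time-of-flight calculation: a pulse travels to the obstacle
# and back, so the distance is half the round-trip path at the speed of light.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance in meters for a measured round-trip time in seconds."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a ~13.3 ns round trip corresponds to roughly 2 m.
print(f"{tof_distance(13.3e-9):.2f} m")  # ~1.99 m
```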

Software: Using the ROS framework, we developed a modular pipeline that processes the incoming data streams from each sensor. ROS let us combine the sensor streams asynchronously and publish the results across a distributed network of nodes.
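A minimal sketch of what one node in this pipeline could look like in Python with rospy (topic names like /scan, /imu/data, and /navaid/obstacle, and the 1 m threshold, are assumptions for illustration rather than our exact configuration):

```python
#!/usr/bin/env python
# Sketch of a NavAid-style ROS node: LiDAR and IMU callbacks arrive
# asynchronously, and an obstacle flag is published for downstream nodes.
import rospy
from sensor_msgs.msg import LaserScan, Imu
from std_msgs.msg import Bool

OBSTACLE_RANGE_M = 1.0  # assumed proximity threshold

class ObstacleNode:
    def __init__(self):
        self.pub = rospy.Publisher("/navaid/obstacle", Bool, queue_size=1)
        rospy.Subscriber("/scan", LaserScan, self.on_scan)
        rospy.Subscriber("/imu/data", Imu, self.on_imu)
        self.latest_orientation = None

    def on_imu(self, msg):
        # Cache the most recent orientation estimate for later fusion.
        self.latest_orientation = msg.orientation

    def on_scan(self, msg):
        # Flag an obstacle if any valid return is closer than the threshold.
        close = any(msg.range_min < r < OBSTACLE_RANGE_M for r in msg.ranges)
        self.pub.publish(Bool(data=close))

if __name__ == "__main__":
    rospy.init_node("navaid_obstacle_node")
    ObstacleNode()
    rospy.spin()
```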

LiDAR: The point cloud generated by the LiDAR is continuously scanned for nearby surfaces and objects. We apply filtering to identify potential obstacles and environmental boundaries, and when an obstacle is detected nearby, the corresponding LED lights up.
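Building on the obstacle flag from the sketch above, the LED side could look roughly like this (Jetson.GPIO pin 12 and the topic name are illustrative assumptions, not our exact wiring):

```python
#!/usr/bin/env python
# Sketch: drive an LED from the published obstacle flag via Jetson.GPIO.
import rospy
from std_msgs.msg import Bool
import Jetson.GPIO as GPIO

LED_PIN = 12  # hypothetical board pin wired to one of the LEDs

def on_obstacle(msg):
    # Turn the LED on while an obstacle is flagged, off otherwise.
    GPIO.output(LED_PIN, GPIO.HIGH if msg.data else GPIO.LOW)

if __name__ == "__main__":
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(LED_PIN, GPIO.OUT, initial=GPIO.LOW)
    rospy.init_node("navaid_led_node")
    rospy.Subscriber("/navaid/obstacle", Bool, on_obstacle)
    try:
        rospy.spin()
    finally:
        GPIO.cleanup()
```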

IMU: To ensure accurate orientation and movement tracking, we filtered the raw IMU data with a complementary filter, which blends gyroscope readings with the more stable accelerometer data. This corrects for drift and noise, improving the consistency of the orientation estimate.
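A one-axis version of the complementary filter idea, as a hedged sketch (the 0.98 blend factor is illustrative, not our exact tuning):

```python
import math

ALPHA = 0.98  # assumed blend factor: trust the gyro short-term, accel long-term

def complementary_filter(prev_angle, gyro_rate, accel_x, accel_z, dt):
    """One-axis orientation estimate in radians."""
    gyro_angle = prev_angle + gyro_rate * dt    # responsive, but drifts over time
    accel_angle = math.atan2(accel_x, accel_z)  # noisy, but drift-free reference
    return ALPHA * gyro_angle + (1.0 - ALPHA) * accel_angle
```

The heavily weighted gyro term follows fast motion, while the small accelerometer weight continually pulls the estimate back toward gravity, which is what cancels the drift.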

Challenges we ran into

Sensor instability and drift: One of the most significant hurdles we faced early on was IMU drift, particularly from the magnetometer. The IMU sits close to the system's battery, which introduced local magnetic interference and significantly disrupted the magnetometer readings. This caused instability in the yaw values and made calibration tricky.

Hardware: Wiring was a challenge because each LED had to be connected to the Jetson Nano by hand. It was time-consuming, but ultimately rewarding to see the finished result in action. Another hardware issue was configuring the Jetson Nano to host a Wi-Fi hotspot while simultaneously managing connections to two computers and multiple sensors. It took a lot of trial and error, but we eventually found a network configuration that worked for us.

Accomplishments that we're proud of

Real-time obstacle detection: We successfully integrated LiDAR and IMU data to detect obstacles and trigger alerts in real time.

Onboard processing: All computation runs on the Jetson Nano, so no data ever has to travel to the cloud, which keeps latency low.

Reliable sensor fusion: Despite initial instability, we implemented a complementary filter to fuse IMU data and correct for drift, resulting in smooth and consistent orientation tracking.

What we learned

Sensor data is messy. Raw LiDAR and IMU outputs are noisy and often unreliable without proper filtering and calibration. We learned how critical sensor fusion and smoothing algorithms are for stable results.

Debugging hardware takes patience. Simple things like wiring errors or unstable hotspots can stall progress, and we learned to approach these issues methodically.

What's next for NavAid

Audible alert system: Currently, NavAid uses LED indicators for obstacle warnings. Our next step is an audible alert system that uses spatial audio cues to convey an obstacle's direction more precisely.

Full camera integration with object detection: We plan to upgrade the camera system with real-time object detection, enabling NavAid not only to detect obstacles but also to identify them (distinguishing between a person, a pole, or a doorway, for example), giving the user richer situational awareness. The camera would provide much-needed context in scenarios such as walking down stairs, and would supply the distance and direction of people or objects. We also plan to overlay the camera data onto the LiDAR data for better visualization and path planning.
