Inspiration
Society lacks tooling built to help disabled people with aspects of daily life that most take for granted. Something as simple as walking around can pose unique challenges to someone who is visually impaired, and SenseBelt aims to remedy this difficulty.
What it does
SenseBelt is a tool you wear around your waist that gives you haptic feedback in case you are about to bump into something.
How we built it
The belt consists of a RealSense D435i depth camera as input, a Jetson Nano as the main compute unit, and an ESP32 driver board for a set of TITAN haptic motors that act as output. The camera is mounted at the front of the belt, and the haptic motors are placed on either side of the belt as well as directly below the camera. Fusion 360 was used to design mounts for the camera and motors, which were then threaded onto the belt for a secure product.
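The core loop on the Jetson can be sketched roughly as: split each depth frame into left / center / right zones, find the nearest obstacle in each zone, and map that distance to a vibration intensity for the corresponding motor. The sketch below is illustrative, not the actual SenseBelt code; the zone split, distance thresholds, and 0-255 duty-cycle range are all assumptions.

```python
# Hypothetical sketch of SenseBelt's depth-to-haptics mapping.
# Thresholds below are assumed values, not the real configuration.
MIN_DIST_M = 0.3   # closer than this -> full vibration
MAX_DIST_M = 2.0   # farther than this -> motor off

def zone_minimums(depth_row_m):
    """Split one row of depth readings (metres) into 3 zones
    (left, center, right) and return the minimum valid distance per zone.
    RealSense cameras report 0 where no depth data is available."""
    n = len(depth_row_m)
    third = n // 3
    zones = [depth_row_m[:third],
             depth_row_m[third:2 * third],
             depth_row_m[2 * third:]]
    mins = []
    for zone in zones:
        valid = [d for d in zone if d > 0]
        mins.append(min(valid) if valid else float("inf"))
    return mins

def intensity(dist_m):
    """Map an obstacle distance to a 0-255 PWM duty cycle that the
    ESP32 could apply to a haptic motor (linear falloff assumed)."""
    if dist_m <= MIN_DIST_M:
        return 255
    if dist_m >= MAX_DIST_M:
        return 0
    frac = (MAX_DIST_M - dist_m) / (MAX_DIST_M - MIN_DIST_M)
    return int(round(255 * frac))

# Example: obstacle 0.5 m away on the left, nothing close elsewhere
row = [0.5, 0.6, 3.0, 3.0, 0.0, 4.0]
left, center, right = zone_minimums(row)
duties = [intensity(d) for d in (left, center, right)]  # left motor buzzes
```

In practice the Jetson would compute these duties per frame and send them to the ESP32, which drives the motors via PWM.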
Challenges we ran into
Initially the scope of this project was much larger: a completely wireless solution capable of object avoidance plus SLAM, enabling it to remember previous environments and update them as they changed. The main issue with the wireless solution was that Eduroam was difficult to work with in a project heavy on SSH and Docker containers; we resolved this by connecting an Ethernet cable directly to our coding laptop. A hotspot couldn't be used either, because the data generated by the camera was much too large.
The SLAM idea was scrapped halfway through the hackathon, as the complexity intertwined within the packages was too much to unwind if we wanted to guarantee a working product. Similarly, the belt was meant to give contextual responses in a form similar to a behavior tree, but time constraints prevented this from taking shape in any significant way.
Accomplishments that we're proud of
What we learned
What's next for SenseBelt
Many of the initial challenges we faced became stepping stones toward what SenseBelt could become. In this hackathon only passive object avoidance was accomplished, but with continued development, active guidance and navigation are plausible. Additionally, context-based behavior within passive object avoidance could be established, which would vastly expand the belt's use cases.
Built With
- arduino
- autodesk-fusion-360
- c++
- isaac-ros
- python
- ros2