Inspiration
We initially wanted to build a robot that could rescue people trapped in snow, but we quickly realized there's already a far better platform: dogs, whose keen sense of smell helps them immensely in finding victims of natural disasters. So we took the sensors and IoT data we would have put on a robot and added them to a dog harness instead, in an effort to increase the number of dogs a single rescue worker can utilize at the same time.
What it does
It's a mobile platform that integrates an Android phone for GPS, a lidar for odometry, and a USB camera for computer vision and microphone-based bark detection. Together, these let us localize a dog from the moment it is first sent to search an area all the way up until it finds a victim. On top of using computer vision to identify people, our camera uses its microphone to detect the barks that rescue dogs typically make when they have found a person. Our platform then also extends an arm to take the person's vitals with temperature and heart-rate sensors and determine whether resources like a helicopter would be needed for the rescue.
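Our bark detector's exact pipeline isn't described here, but the core idea of flagging loud transients against a quiet background can be sketched in a few lines. This is a hypothetical minimal version (function names and the threshold ratio are illustrative, not our production code):

```python
# Minimal energy-threshold bark-candidate detector (illustrative sketch).
# Frames whose RMS energy jumps well above the background noise floor
# are flagged as possible barks.

def rms(frame):
    """Root-mean-square energy of one audio frame."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def detect_barks(samples, frame_size=512, threshold_ratio=4.0):
    """Return indices of frames whose energy exceeds threshold_ratio
    times the median frame energy (a crude noise-floor estimate)."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples) - frame_size + 1, frame_size)]
    energies = [rms(f) for f in frames]
    floor = sorted(energies)[len(energies) // 2]  # median as noise floor
    return [i for i, e in enumerate(energies)
            if floor > 0 and e > threshold_ratio * floor]
```

A real detector would also look at spectral shape and duration, since a median-energy threshold alone will fire on any loud noise, not just barks.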
How we built it
We wrote Python scripts that ran on the Jetson to gather data from each of our sensors, and used a divide-and-conquer work strategy. Matthew did all the amazing CAD and renders; Ryan project-managed, created the battery, and managed the 3D prints; Spencer soldered and integrated the gyro and accelerometer sensors; Joseph got GPS working on an Android phone and worked on bark detection; and Abhi got the camera and lidar streaming to a base station and running OpenCV and odometry.
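Each sensor script ultimately had to ship its readings off the Jetson to the base station. A hypothetical sketch of the serialization step, using JSON lines (field names here are illustrative; our real per-sensor scripts differed):

```python
# Illustrative sketch: encode one sensor reading as a JSON line for
# streaming from the Jetson, and decode it on the base station.
import json
import time

def make_packet(sensor, reading, ts=None):
    """Serialize one timestamped sensor reading as a single JSON line."""
    return json.dumps({
        "sensor": sensor,
        "reading": reading,
        "ts": ts if ts is not None else time.time(),
    }) + "\n"

def parse_packet(line):
    """Decode one JSON line back into a dict on the base station."""
    return json.loads(line)
```

Newline-delimited JSON keeps the stream trivially parseable over a TCP socket: the receiver just reads line by line and decodes.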
Challenges we ran into
We struggled quite a bit with getting the different sensors working on a Raspberry Pi and trying to use Viam's software. We ended up switching to a Jetson and doing some of our processing there, but ultimately exported most of our data to our base station for processing. We had interesting results from odometry: without a gyro correcting the inherent angle drift, our data was very accurate initially and then spiraled out of control. We largely ran out of time for integration, but still integrated the systems as best we could. We may have been a bit overly ambitious with how many sensors we wanted at the start, but we think it made sense for the product, and we're glad we got to learn how to interact with all of them.
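The gyro fix for heading drift can be sketched as a complementary filter: trust the gyro's integrated rate for short-term changes, and let the slower odometry heading pull the estimate back so neither source's drift accumulates unchecked. This is a minimal sketch of the idea, not our exact code (the blend weight `alpha` is an assumed value):

```python
# Complementary-filter sketch for heading: blend integrated gyro rate
# (good short-term) with lidar-odometry heading (good long-term) so a
# slow bias in either source can't spiral the estimate out of control.

def fuse_heading(prev_heading, gyro_rate, odom_heading, dt, alpha=0.98):
    """alpha weights the short-term gyro integration; (1 - alpha)
    trusts the lidar-odometry heading to bleed off accumulated drift."""
    gyro_heading = prev_heading + gyro_rate * dt  # dead-reckon one step
    return alpha * gyro_heading + (1 - alpha) * odom_heading
```

Each step the estimate moves a small fraction of the way toward the odometry heading, so a constant offset decays geometrically instead of growing without bound.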
Accomplishments that we're proud of
We're really proud of how quickly we came up with the idea, just over an hour into the event, and it's really awesome how many sensors and algorithms we ended up using. The CAD design also looks amazing and super professional! We've also got an awesome logo and pitch deck for people to come check out at our table!
What we learned
We learned how to gather GPS and other data from photos taken on an Android phone, how to program SLAM for a lidar, and how to interface with Nvidia's open-source CV libraries. We also had a blast printing interesting mounts for our project, and we learned to solder GPIO and XT-60 connectors and to integrate with so many different sensors!
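Phone photos store their GPS fix in EXIF as (degrees, minutes, seconds) rationals plus a hemisphere reference. Reading the tags themselves needs a library such as Pillow, but the conversion step to decimal degrees is standalone; a sketch (values in the usage comments are illustrative):

```python
# EXIF GPS coordinates are stored as degrees/minutes/seconds plus a
# hemisphere reference ('N'/'S' or 'E'/'W'). This converts them to the
# signed decimal degrees most mapping tools expect.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style DMS coordinates to signed decimal degrees.
    ref is the hemisphere tag: 'N'/'E' positive, 'S'/'W' negative."""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    return -decimal if ref in ("S", "W") else decimal
```

With Pillow, the DMS tuples would come from the photo's GPS IFD (e.g. via `Image.getexif()`), and this function turns them into coordinates you can plot directly.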
What's next for Barkpak
If possible, we'd like to actually test it on a much larger dog, and tackle some of the following:
- LTE/5G integration
- Dog-to-dog communication: swarm dogbotics
- Speaker systems: command dogs to direct them away from one another, increasing the search radius, and speak to potential survivors
- Upgraded vitals systems: better heart-rate and temperature monitors (we had an ambient-temperature sensor working, but it didn't end up being very useful)
Built With
- 3dprinting
- accelerometer
- gyro
- integration-hell
- jetson-nano
- lidar
- opencv
- prusamk3
- prusaslicer
- python
- solidworks
