Inspiration

We wanted to practice our skills in AI and audio tracking, and we thought a project centered on audio-driven AI would be a unique direction to take. Search parties were of particular interest to us because of the confusion that can arise when many events unfold across a search area. Real-time information updates and location-based tracking would benefit these groups, especially civilian volunteers looking to help out.

What it does

SARAI allows for swift communication between members of a search party. With real-time location and event updates, members can easily stay up to date on what is happening during a search and rescue operation.

How we built it

The frontend is built with React Native and the backend with FastAPI. Expo handled deployment to mobile. AWS DynamoDB stores the data, Firebase handles authentication, and ngrok exposed the backend for production deployment. The OpenAI API powers the AI features.
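
To show how the pieces fit together, here is a minimal sketch of the backend pattern: a FastAPI endpoint that accepts a member's location update and writes it to DynamoDB. The table name, field names, and region are hypothetical stand-ins, not our exact schema.

```python
# Minimal sketch: FastAPI endpoint storing a member's location in DynamoDB.
# Table name, fields, and region below are hypothetical, not our real schema.
import time

import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
table = boto3.resource("dynamodb", region_name="us-east-1").Table("SearchPartyLocations")

class LocationUpdate(BaseModel):
    member_id: str
    lat: float
    lon: float

@app.post("/location")
def post_location(update: LocationUpdate):
    # Each update is keyed by member and timestamped so teammates
    # can poll for everyone's latest position.
    table.put_item(Item={
        "member_id": update.member_id,
        "ts": int(time.time()),
        "lat": str(update.lat),  # boto3 requires Decimal for DynamoDB numbers;
        "lon": str(update.lon),  # storing as strings keeps the sketch simple
    })
    return {"ok": True}
```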

Challenges we ran into

We overestimated the scope of the project and, at one point, spread ourselves too thin. We had to accept that we could not accomplish all of our goals in the limited time the hackathon gave us. Difficult as it was, we let go of some brilliant ideas for the sake of time.

Accomplishments that we're proud of

One accomplishment we are particularly proud of is real-time location tracking between members of a search party. We built it without any established mapping API, working directly with raw latitude and longitude points instead. The result is a very basic yet effective system that would even make the Google Maps engineers blush.
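
As an illustration of the kind of math involved in working with raw coordinates, the haversine formula gives the great-circle distance between two points, which is enough to tell teammates how far apart they are. This is a general-purpose sketch, not our exact code.

```python
# Great-circle distance between two lat/lon points via the haversine formula.
# A general-purpose sketch of working with raw coordinates, not our exact code.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Distance in meters between (lat1, lon1) and (lat2, lon2) in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Example: two searchers roughly 150 m apart
print(round(haversine_m(25.7617, -80.1918, 25.7627, -80.1928)))
```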

What we learned

Since it was our first time at Shell Hacks, we learned a lot about hackathon culture. We adapted to working as a team and taking shifts, especially overnight, to keep the project moving. Time management was a big obstacle to overcome, sometimes at the price of sleep.

What's next for SARAI

Next is enabling real-time visual input and output. We imagine something akin to body cameras that can analyze a scene and flag notable events, for example, detecting people in dark or cloudy conditions. This would help search parties operate in low-visibility environments.
