Inspiration
We understand that many people in the world struggle with memory loss or difficulty recognizing faces, and we know this is a challenge not only for the individual but also for their family. Our goal was to develop an app simple enough to support the millions of people who face this challenge: something simple, human, and helpful that lets people maintain a connection to the faces and relationships that matter most to them. With this in mind, we developed EyeRemember.
What it does
EyeRemember is a mobile app designed with simplicity and empathy in mind. The home screen opens the camera directly, allowing users to easily scan faces in real time. A separate tab lets family members and caregivers create individual profiles by adding photos, names, brief descriptions, and each person's relationship to the user. When our AI model recognizes a familiar face, the app displays the person's name, their connection, and a short description, giving the user a gentle reminder of who they're seeing. EyeRemember is a discreet app that allows users to navigate social moments with confidence.
How we built it
The building process combined front-end and back-end development, each focusing on different core functionalities.

Back-end: We used Intel AI models trained specifically for facial identification and recognition. The back-end processes the data uploaded by users: images, descriptions, and relationships. Whenever the user scans an individual, the app draws a box around their face and checks whether that face is in the system by comparing embeddings from the person's uploaded images to the embedding of the current frame. We use cosine similarity to decide whether the embeddings are close enough to identify a person; if they are, the app displays the stored information on the camera (home) screen.

Front-end: Developed with React Native and Expo, the front end includes a camera feature, built in TSX files, that allows users to scan faces. It also includes a tab, built with JS files, where caregivers can upload images and descriptions of their loved ones. The interface was designed to be minimal and accessible, with the main screen dedicated primarily to the camera.
Challenges we ran into
The front end had numerous issues with Expo Go: the app downloaded, but it would not establish a connection between the phone and the computer, which slowed down early testing. We also ran into Git branching issues that caused merge conflicts and blocked commits, making collaboration difficult and forcing teammates to be extremely careful when pulling or pushing anything. Integrating AI models smoothly into a mobile environment proved to be a significant challenge, but one of the biggest was figuring out how to efficiently connect the back end to the front end and sort out pathing.
Accomplishments that we're proud of
Considering the short duration of the hackathon, we are proud to have built a prototype that can detect and identify faces through a user-friendly interface, a tool that addresses a real-life issue in an empathetic manner. Even though we had limited familiarity with Intel AI and its implementations, we used our resources and knowledge to create an app that we hope to develop further. We built a camera feature, added user-profile creation, and integrated AI models within a tight deadline, and we are proud of our efforts.
What we learned
Through this hackathon, we gained hands-on experience integrating AI models into a mobile app. After many early challenges, we learned how to use React Native. We also learned how to handle real-time data, specifically faces. We understood from the start that a good workflow would be key to this project and that unexpected issues might arise. We now realize how even simple tools can make a big difference for people facing complex challenges.
What's next for EyeRemember
We’re excited about the potential to improve the accuracy and speed of face recognition. Our initial goal was to build glasses with a camera that would display the information; however, given the limited time of the hackathon, that wasn’t feasible. In the future, we’d like not only to improve the app and web versions of EyeRemember but also to support a wearable device, such as smart glasses!

