Inspiration

After deciding on the Health Track, we recognized how difficult it is to track everyday food intake. To address this, we built FOOOOOD, a web app that lets users track their calorie intake simply by uploading images of their meals.

What it does

Our web application lets users monitor the calorie count of each food item on their plate. Because the model cannot always distinguish closely related foods, we present users with several variations of each detected item, each with its own nutritional facts. For instance, if butter is detected on the plate, users see nutrition information for both Butter with Salt and Unsalted Butter, so they can pick the variant that matches what they actually ate.
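One way to implement this variant step is a lookup from the detected class name to a list of variant entries, each carrying its own nutrition facts. A minimal sketch in Python, where the class names and nutrition numbers are illustrative stand-ins rather than our actual database:

```python
# Hypothetical variant table: detected class name -> list of variants,
# each with its own nutrition facts (values per 100 g, illustrative only).
FOOD_VARIANTS = {
    "butter": [
        {"name": "Butter with Salt", "calories": 717, "sodium_mg": 643},
        {"name": "Unsalted Butter", "calories": 717, "sodium_mg": 11},
    ],
    "rice": [
        {"name": "White Rice, Cooked", "calories": 130, "sodium_mg": 1},
        {"name": "Brown Rice, Cooked", "calories": 112, "sodium_mg": 5},
    ],
}

def variants_for(detected_class: str) -> list[dict]:
    """Return all nutrition variants for a detected food class."""
    return FOOD_VARIANTS.get(detected_class.lower(), [])

# A detection of "butter" yields both the salted and unsalted entries.
print([v["name"] for v in variants_for("Butter")])
```

The frontend can then render each variant as a selectable option, and the user's choice decides which nutrition facts are added to their daily total.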

How we built it

FOOOOOD pairs a ReactJS frontend with a Python backend built on Flask and YOLOv8 (via Ultralytics).
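At a high level, the Flask backend runs YOLOv8 on the uploaded image and turns the detections into a per-item summary the frontend can render. The helper below sketches only the aggregation step; the detection output is mocked as (class name, confidence) pairs rather than a real Ultralytics results object, and the threshold value is an assumption:

```python
from collections import Counter

def summarize_detections(detections, min_conf=0.5):
    """Count food items detected above a confidence threshold.

    `detections` is a mocked stand-in for YOLOv8 output:
    a list of (class_name, confidence) pairs.
    """
    counts = Counter(name for name, conf in detections if conf >= min_conf)
    return dict(counts)

# Mocked model output for one plate image.
raw = [("butter", 0.91), ("bread", 0.84), ("bread", 0.47), ("rice", 0.77)]
print(summarize_detections(raw))  # the low-confidence "bread" is dropped
```

In the real service, each counted class name is then expanded into its nutrition variants before the response is sent back to the ReactJS client.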

Challenges we ran into

One of the main challenges was choosing a model for food segmentation. We initially wanted to use FoodSAM, which combines SAM with an object detector and a semantic segmenter, to classify our images, but we could not get its dependencies working on our machines. We therefore pivoted to other pre-trained models and landed on YOLOv8.

Accomplishments that we're proud of

We take pride in overcoming initial challenges, notably our struggle to develop models on the first day. Despite this setback, we successfully reached the finish line with a polished product, demonstrating our resilience and commitment to the project. Learning and implementing YOLOv8, a powerful and versatile model, was a significant achievement.

What we learned

Throughout this project, we immersed ourselves in food segmentation techniques, exploring state-of-the-art models and the broader possibilities of computer vision. We learned how differently each of these models approaches food segmentation and other computer vision problems.

What's next for FOOOOOD

For the next phase of FOOOOOD, our primary focus is improving model performance through larger models and higher-resolution images. So far, we have trained the model on images at 500x500 resolution. With more time and a more powerful computing environment, we aim to train on larger images and an expanded dataset, fine-tuning the model to further improve its accuracy and robustness in food segmentation and nutritional analysis.

Built With

ReactJS, Python, Flask, YOLOv8 (Ultralytics)
