Inspiration

A couple of our teammates had been working together to get in shape and eat healthy the past few weeks, but realized how tedious it can be to track down nutrition information for every single meal. We realized that if we're having this problem, there are definitely many more people who are trying to get fit or keep a health condition in check who need to track their nutrient intake too. That's why we built NutriScanner.

What it does

NutriScanner takes an image of a meal that the user uploads through our site and runs it through a deep learning model to closely estimate its calories, fat, saturated fat, protein, carbohydrates, fiber, and cholesterol.
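To give a sense of the flow, here's a minimal sketch of the upload-to-estimate path. The route name, the `nutrition_model` module, and the `predict_nutrients` helper are illustrative assumptions for this sketch, not our exact code:

```python
# app.py -- minimal sketch of the upload-to-estimate flow
# (route and helper names are illustrative assumptions)
from flask import Flask, request, jsonify
from PIL import Image

from nutrition_model import predict_nutrients  # hypothetical inference helper

app = Flask(__name__)

NUTRIENTS = ["calories", "fat", "saturated_fat", "protein",
             "carbohydrates", "fiber", "cholesterol"]

@app.route("/scan", methods=["POST"])
def scan():
    # Decode the uploaded meal photo and run it through the CNN.
    image = Image.open(request.files["meal"].stream).convert("RGB")
    values = predict_nutrients(image)  # returns one float per nutrient
    return jsonify(dict(zip(NUTRIENTS, values)))
```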

How we built it

We built the backend in Python with Flask and connected it to an HTML/CSS frontend. We gathered over 20,000 food images by scraping several recipe and food-blogging sites. On the deep learning side, we preprocessed these images to resize them to a common size while losing as little information as possible. Then we trained a convolutional neural network on Google Compute Engine and integrated it with our backend.
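As a rough illustration of the training side, here's a compact Keras sketch of this kind of network: a small CNN with a seven-value regression head trained with mean squared error (the loss we report below). The layer sizes, the 128x128 input resolution, the optimizer, and the assumption that nutrient targets are scaled to [0, 1] are all ours for illustration, not our exact architecture:

```python
# train.py -- hedged sketch of the CNN regression setup
# (layer sizes, input resolution, and optimizer are illustrative assumptions)
from tensorflow.keras import layers, models

IMG_SIZE = (128, 128)  # every scraped image is resized to one common shape

def build_model():
    model = models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 3)),
        layers.Rescaling(1.0 / 255),             # normalize pixel values
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        # Seven outputs: calories, fat, saturated fat, protein,
        # carbohydrates, fiber, cholesterol (targets assumed scaled to [0, 1]).
        layers.Dense(7, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")  # regression with MSE loss
    return model
```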

Challenges we ran into

Getting all of the components to work together was difficult. Figuring out how to pre-train a model on external GPUs and wire it into the backend took more time than expected. However, through some clever pickle serialization and helper functions imported between scripts, we got the model to work consistently and efficiently on a wide range of images.
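The hand-off pattern looked roughly like the sketch below: the training script serializes the finished model once, and the backend imports a single loader at startup so every request reuses the same in-memory model. The file path and function names are illustrative:

```python
# model_io.py -- hedged sketch of the pickle hand-off between the training
# script and the Flask backend (paths and names are illustrative)
import pickle

MODEL_PATH = "nutrient_model.pkl"

def save_model(model, path=MODEL_PATH):
    # Called once at the end of training, on the GPU machine.
    # Depending on the framework, you may need to pickle the weights
    # rather than the whole model object.
    with open(path, "wb") as f:
        pickle.dump(model, f)

def load_model(path=MODEL_PATH):
    # Called once when the Flask app starts, so each request reuses
    # the same in-memory model instead of reloading it from disk.
    with open(path, "rb") as f:
        return pickle.load(f)
```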

Accomplishments that we're proud of

We're proud of how accurate our deep learning model was on our randomized test dataset in such a short time, with a mean squared error loss of 0.0069. Our team dynamic was an accomplishment in itself. From beginning to end, we all bounced feature and technology ideas around the whiteboard and helped each other fill in knowledge gaps while implementing our subsections of the project. Seeing teammates of varying experience levels come together on one end-to-end project was amazing.

What we learned

More than anything, we learned how to learn. There were several technologies we had never seen before, so we learned how to pick up new tools and work them in with what we already know. Additionally, we learned how to integrate machine learning models with a backend, which taught us how to make these algorithms usable.

What's next for NutriScanner

20,000 images is a lot, but there are also a lot of different kinds of food. We hope to expand our dataset to be more robust to rarer foods. We also plan to spend more time iterating through different sequences of layers for our CNN to see what truly works best. In addition, we will give the site a more modern UI to keep on-the-fence users engaged. Lastly, we'll scale the app by deploying it on AWS and implementing a real-time database so users all over the world can log their meals and get instant feedback simultaneously.
