Inspiration

An imbalanced diet is one of the most overlooked habits, leading to health problems such as obesity, cardiovascular disease, diabetes, osteoarthritis, and malnutrition. People often ignore nutritional values and prefer tastier options over healthier alternatives. We address this problem with a straightforward solution: the app obtains information about the nutrition content of the ingredients by scanning the user's grocery list.

What it does

The app scans a grocery list written by the user and analyzes the nutritional information of the listed ingredients. The user is then recommended healthy recipes that can be made with those ingredients; the recipes are clustered together according to their directions. The app is experienced through an AR interface where the user can select a recipe to see the dish's nutritional information in graphical form, along with recipe details such as ratings, cooking time, and any ingredients they are missing.

How we built it

We host a Flask server in Python that receives an image of the user's grocery list from the mobile app and uses the Google Cloud Vision API's OCR to recognize the handwritten ingredients. A custom spell checker implementing Levenshtein distance corrects any misspellings in the OCR output. After obtaining the ingredient list, we gather nutritional values by scraping individual nutrition facts for each ingredient and represent the grocery list's nutrition as a radar plot. We trained a distributed-memory Doc2Vec model on 5,000 recipes scraped from the web. The scraped data was cleaned and preprocessed for training using NLP techniques (stemming, lemmatizing, stop-word removal), and garbage data was removed with regular expressions. The user is presented with an AR 3D map of recipes in which similar recipes are clustered together using Doc2Vec. Clicking on a recipe shows its details: ratings, number of reviews, cooking time, the ingredients the user is missing, and a button linking to the recipe site.
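The spell-checking step above can be sketched roughly as follows. This is a minimal, dependency-free illustration of Levenshtein-distance correction, not our exact implementation; the `correct` helper and the tiny vocabulary are hypothetical.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]


def correct(word: str, vocabulary: list[str]) -> str:
    """Snap an OCR token to the closest known ingredient name."""
    return min(vocabulary, key=lambda v: levenshtein(word.lower(), v))
```

For example, `correct("tomatoe", ["tomato", "potato", "onion"])` returns `"tomato"`, since it is only one edit away.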

Challenges we ran into

One of the greatest challenges we faced was designing a server architecture that could fetch information from the web without delays that hinder the user experience. We solved this by keeping a cache of all queries made to the server. We also encountered problems creating the nutritional-fact maps and properly training the Doc2Vec model and mapping its encodings into 3D space for augmented reality. Finally, we spent an immense amount of time cleaning and preprocessing the dataset to get high accuracy from our model.
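The query cache described above can be sketched in a few lines. This is a simplified illustration under assumed names (`get_nutrition`, the injected `fetch` callable), not our production code: repeated lookups for the same ingredient skip the expensive scrape.

```python
# In-memory cache of scraped nutrition facts, keyed by normalized ingredient name.
_cache: dict = {}

def get_nutrition(ingredient: str, fetch) -> dict:
    """Return cached nutrition facts, calling `fetch` (the scraper) only on a miss."""
    key = ingredient.strip().lower()
    if key not in _cache:
        _cache[key] = fetch(key)   # the slow network call happens at most once per key
    return _cache[key]
```

Because the key is normalized, "Tomato" and "tomato " hit the same cache entry, so only the first request for an ingredient pays the scraping cost.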

Accomplishments that we're proud of

Once trained, the Doc2Vec model is remarkably effective at finding intricate similarities in how recipes are made, which gives an accurate metric for mapping recipes in 3D space. The query cache also makes the server roughly 10 times faster, so the user experience is never delayed.
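The similarity metric underlying the recipe map can be illustrated without any dependencies. Gensim's Doc2Vec exposes ranked similarity queries directly, but assuming we already have the model's recipe vectors, ranking by cosine similarity is a short sketch (the recipe names and vectors below are made up):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest(query, vectors):
    """Recipe names ranked from most to least similar to the query vector."""
    return sorted(vectors, key=lambda name: cosine(query, vectors[name]), reverse=True)
```

Recipes whose inferred vectors point in nearly the same direction end up adjacent in the 3D map, which is why dishes with similar directions cluster together.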

What we learned

We gained numerous insights from our model's predictions on the dataset, such as the strong similarity of cooking methods among ingredients with similar nutritional values. We also learned to handle several concurrent handshakes between the server and the app to keep the user experience as smooth as possible.

What's next for Nutri.grapher

We plan to optimize the OCR in Nutri.grapher so it can handle cursive handwriting and read ingredients directly from a shopping bill. Nutri.grapher will also store user health data and deficiencies and suggest ingredients and recipes accordingly to keep you healthy. Third, we plan to add a multi-object detection system that scans ingredients directly in the real world for ease of use. Lastly, as a data-centric application, we want to build other models that can infer further relations between recipes.
