Inspiration

ChefGPT was inspired by college students who were tired of eating dining hall food and cooking the same boring meals every day. Limited by the ingredients available in their fridges and pantries, they thought it'd be cool if there was a way to find recipes they could prepare based on the ingredients at their disposal.

What it does

ChefGPT uses an image recognition model to classify pictures of ingredients taken with the camera/viewfinder on the user's iOS device. Recognized ingredients are added to the user's pantry, from which the application generates possible recipes. Additionally, users can explore recipes beyond their pantry using various filters.
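
As a rough sketch of that classification step, the snippet below runs a captured photo through a Create ML-trained model via Apple's Vision framework and returns the top ingredient label. IngredientClassifier is a placeholder name for the Xcode-generated model class, not necessarily the model used in the app.

```swift
import Vision
import CoreML
import UIKit

// Minimal sketch: classify a captured photo with a Create ML-trained model.
// "IngredientClassifier" is a placeholder for the generated model class.
func classifyIngredient(in image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let mlModel = try? IngredientClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: mlModel) else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Take the top classification label, e.g. "tomato" or "onion".
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```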

How we built it

The image classification model was trained using Create ML, several popular food datasets, and online image scraping to supplement image classes that couldn't be sourced. The trained model is integrated into our iOS app with Apple's Core ML and VisionKit frameworks. Additionally, we use the Spoonacular API to generate lists of recipes, ingredients, instructions, and more based on the parameters given by the user.
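
To illustrate the recipe-lookup side, here is a minimal sketch of a call to Spoonacular's findByIngredients endpoint, assuming an async/await networking layer; RecipeMatch is a simplified response type and SPOONACULAR_API_KEY is a placeholder for a real key.

```swift
import Foundation

// Simplified slice of Spoonacular's "find recipes by ingredients" response.
struct RecipeMatch: Decodable {
    let id: Int
    let title: String
}

// Query Spoonacular for recipes that can be made from the given ingredients.
func findRecipes(matching ingredients: [String]) async throws -> [RecipeMatch] {
    var components = URLComponents(string: "https://api.spoonacular.com/recipes/findByIngredients")!
    components.queryItems = [
        URLQueryItem(name: "ingredients", value: ingredients.joined(separator: ",")),
        URLQueryItem(name: "number", value: "10"),
        URLQueryItem(name: "apiKey", value: "SPOONACULAR_API_KEY")  // placeholder key
    ]
    let (data, _) = try await URLSession.shared.data(from: components.url!)
    return try JSONDecoder().decode([RecipeMatch].self, from: data)
}

// Usage: let recipes = try await findRecipes(matching: ["tomato", "basil", "mozzarella"])
```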

Challenges we ran into

Everyone on our team was completely new to Swift and iOS development, so there was a steep learning curve with the language and tooling. We also struggled to get source control set up across the entire team.

What's next for ChefGPT

Next, we'd like to clean up the UI, integrate Core Data more deeply into the application (basic functionality is already there), purchase a premium Spoonacular API plan, and conduct user testing to improve the UX and add sought-after features.
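
For context on the Core Data piece, a minimal sketch of persisting a recognized ingredient to the pantry might look like the following; the PantryItem entity and its attributes are hypothetical, since the app's actual data model isn't shown here.

```swift
import CoreData

// Illustrative sketch: save a classified ingredient to the pantry store.
// "PantryItem", "name", and "dateAdded" are hypothetical entity/attribute names.
func addToPantry(_ name: String, in context: NSManagedObjectContext) throws {
    let item = NSEntityDescription.insertNewObject(forEntityName: "PantryItem", into: context)
    item.setValue(name, forKey: "name")
    item.setValue(Date(), forKey: "dateAdded")
    try context.save()
}
```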

Built With

Swift, Create ML, VisionKit, Core Data, Spoonacular API
