Inspiration
I've had experience with ImageNet in the past, so when the theme was Art, I knew I wanted to build an image-based recommender/analyzer.
What it does
The backend takes any image URL and returns the closest matches from the National Gallery of Art's database. One frontend lets you submit any image and shows its close matches in the NGA.
A second, VR frontend uses the vectors generated from the NGA images and displays them spatially so you can see the relationships between artworks.
How I built it
The backend is written in Python. We downloaded the NGA images with Python scripts, and all of the vectorization was done in Python with TensorFlow.
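The matching step can be sketched as follows. In the real project the feature vectors came from a pretrained Inception network via TensorFlow and lookups used Annoy for approximate nearest neighbors; here, plain NumPy cosine similarity and random stand-in vectors keep the sketch self-contained.

```python
import numpy as np

def cosine_matches(query_vec, gallery, top_k=3):
    """Return indices of the top_k gallery vectors closest to query_vec.

    gallery: (n_images, dim) array of feature vectors. In the real
    pipeline these would be Inception embeddings of the NGA images,
    indexed with Annoy instead of brute-force comparison.
    """
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity per image
    return np.argsort(-sims)[:top_k]  # best matches first

# Toy demo: 5 fake "paintings" with 8-dim vectors.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 8))
query = gallery[2] + 0.01 * rng.normal(size=8)  # near-duplicate of painting 2
best = cosine_matches(query, gallery)
```

With real embeddings, the only change is swapping the random vectors for the network's outputs; the ranking logic stays the same.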
Challenges I ran into
Getting the backend and frontend to communicate, and getting the API deployed to the cloud.
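A minimal sketch of what the match endpoint might look like with Flask (which is in the stack below). The `find_matches` helper here is a hypothetical stand-in for the real download-vectorize-lookup pipeline.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def find_matches(image_url):
    # Hypothetical stand-in: the real version would download the image,
    # embed it with the Inception model, and query the NGA index.
    return [{"title": "placeholder", "score": 1.0}]

@app.route("/match")
def match():
    url = request.args.get("url")
    if not url:
        return jsonify(error="missing 'url' parameter"), 400
    return jsonify(matches=find_matches(url))
```

A frontend on another origin also needs CORS headers enabled on responses like this before the browser will accept them, which is a common stumbling block when wiring the two together.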
Accomplishments that I'm proud of
The vectorization and the API are really cool, and the front-end design looks clean as well.
What I learned
How to deploy APIs on Google Cloud Platform.
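One common way to put a Flask API on GCP is App Engine's standard environment; a minimal config might look like the following (assuming the app object lives in a `main.py`, which is an assumption here, not something stated above):

```yaml
# app.yaml -- App Engine standard environment, Python 3 runtime
runtime: python39
entrypoint: gunicorn -b :$PORT main:app
```

Deploying is then a single `gcloud app deploy` from the project directory.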
What's next for Art Match
Get the VR portion fully working so you can upload an image of your choosing and jump to its location in the space. As you move around, it should also dynamically load nearby images from the full set of 40,000.
Improve the vectorization: scramble the images so the vectors capture style more than subject matter.
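The scrambling idea can be sketched like this: cut each image into tiles and shuffle them before feature extraction, which destroys object-level layout while preserving color and texture, so the embeddings lean toward style. This is a hypothetical NumPy sketch, not the project's code.

```python
import numpy as np

def scramble_patches(img, patch=32, seed=0):
    """Shuffle non-overlapping patch x patch tiles of an H x W x C image.

    Destroying spatial layout removes object cues, so embeddings of the
    scrambled image rely more on color/texture, i.e. style. H and W are
    assumed divisible by `patch` for brevity.
    """
    h, w, c = img.shape
    # Split into a grid of tiles: (rows, cols, patch, patch, c).
    tiles = (img.reshape(h // patch, patch, w // patch, patch, c)
                .transpose(0, 2, 1, 3, 4)
                .reshape(-1, patch, patch, c))
    rng = np.random.default_rng(seed)
    tiles = tiles[rng.permutation(len(tiles))]
    # Reassemble the shuffled tiles back into an H x W x C image.
    return (tiles.reshape(h // patch, w // patch, patch, patch, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(h, w, c))

img = np.arange(64 * 64 * 3).reshape(64, 64, 3)
out = scramble_patches(img)
```

Feeding `out` rather than `img` into the feature extractor would then yield the style-weighted vectors described above.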
Built With
- annoy
- bootstrap
- firebase
- flask
- html5
- imagenet
- inception
- javascript
- jquery
- python
- scikit-learn
- tensorflow
- typescript
- unity
