Inspiration
Where is this meme from? Who is dat? What is that character in the 12th season of my show called again? Who is that actress? I swear I've seen her somewhere... Who made this Super Bowl ad?
When our team came together to brainstorm, one problem really stood out: where do things come from? As big Marvel fans, we find origin stories the most interesting. So we decided to build a way to put anything in the physical world just a quick click away from its definition and history.
What it does
Take a quick picture of anything or upload one from your device, and our app will display information and provide context for you! Along with a quick description inside the app, we also link you to outside sources for more information. If your picture is of a Marvel Comics or movie character, the app pulls data directly from the Marvel API and links you to the corresponding Marvel Wiki page.
How we built it
Blood, sweat, and tears...
But more specifically: the application, built in Android Studio, makes calls to Google Vision, Google Custom Search, and the Marvel API to gather context. Given an image, the app encodes it as a bit string and sends it to Google Vision for image processing and label identification. We then parse the JSON returned by Google Vision for an image name. That name goes to the Marvel API to check whether the image contains a Marvel character. If it does, the app parses the API response for the description of the hero, villain, antihero, etc. and his or her corresponding Marvel Wiki page. If the image is not Marvel related, the name is run through Google Custom Search for context from the Google search engine.
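The flow above can be sketched roughly as follows. This is a simplified illustration, not our Android code: the `marvel_lookup` and `custom_search` callables are hypothetical stand-ins for the real network calls, the Vision response shape follows the Vision REST API's `labelAnnotations` format, and the Marvel auth parameters follow the Marvel API's documented md5(ts + privateKey + publicKey) scheme.

```python
import hashlib
import time

def top_label(vision_response):
    """Pick the highest-scoring label from a Google Vision response
    (labelAnnotations entries each carry a description and a score)."""
    annotations = vision_response["responses"][0]["labelAnnotations"]
    best = max(annotations, key=lambda a: a["score"])
    return best["description"]

def marvel_auth_params(public_key, private_key):
    """The Marvel API expects ts, apikey, and an md5 hash of
    ts + privateKey + publicKey on every server-side request."""
    ts = str(int(time.time()))
    digest = hashlib.md5((ts + private_key + public_key).encode()).hexdigest()
    return {"ts": ts, "apikey": public_key, "hash": digest}

def route_query(vision_response, marvel_lookup, custom_search):
    """Try the Marvel API first; fall back to Google Custom Search."""
    name = top_label(vision_response)
    result = marvel_lookup(name)  # e.g. GET /v1/public/characters?name=...
    if result:
        return ("marvel", result)
    return ("search", custom_search(name))
```

The Marvel-first routing is what lets the app serve richer data for comics characters while still answering for arbitrary photos.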
Challenges we ran into
The documentation for the Google and Marvel APIs was sparse and often targeted a different programming language than ours. We coded the app in two different languages based on the different documentation, but could not consolidate them, so we ended up with two versions of our app, one a bit more fleshed out than the other.
Our other big challenge was getting accurate results on photos taken of the physical, digital, and printed world. We saw roughly a 70-80% success rate, but we also had outlier results that had nothing to do with our photos. Given more time, we could train machine learning models to improve image detection accuracy.
Accomplishments that we're proud of
- Making successful Google and Marvel API calls
- Accurate JSON parsing
- A 70-80% success rate for image detection
- Custom UI/UX
What we learned
How to code different solutions simultaneously and consolidate them.
What's next for Stark_Media
Machine learning! Feed correctly identified photos into buckets to train smarter, faster image recognition.
Improved APIs and developer tools
Built With
- android-studio
- google-cloud
- google-custom-search
- google-vision
- marvel-api
- resty
