Inspiration
We were inspired by Nintendo's Miis and classic games like Tamagotchi, where the aim of the game is simply to watch little characters run around and live their lives. The twist in our version is that the user can take a photo of their own face, and the game will automatically create a little character, or 'Sprimp'. The idea is that after multiple people have created their Sprimps, they can watch them run around a 3D environment and interact with each other.
What it does
We use a computer vision pipeline to detect the user's face in a given image or one taken with the webcam. Image processing techniques locate the face region, crop the image to it, and remove the background to produce a clean face segmentation. We then overlay the face onto a UV map of the Sprimp 3D model. The finished Sprimp is placed in the 3D environment, where a very simple AI makes it run around.
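The crop-and-mask step of the pipeline can be sketched as follows. This is a minimal illustration, not our actual code: it assumes the face bounding box and segmentation mask have already been produced upstream (e.g., by mediapipe's face detection and selfie segmentation), and the function name is hypothetical.

```python
import numpy as np

def crop_and_mask_face(image, bbox, mask, margin=0.1):
    """Crop an RGB image to a face bounding box (with a margin) and
    zero out the background using a segmentation mask.

    image: H x W x 3 uint8 array
    bbox:  (x, y, w, h) pixel bounding box of the detected face
    mask:  H x W float array in [0, 1], 1.0 where the face is
    """
    h, w = image.shape[:2]
    x, y, bw, bh = bbox
    # Expand the box by a relative margin, clamped to the image bounds.
    mx, my = int(bw * margin), int(bh * margin)
    x0, y0 = max(x - mx, 0), max(y - my, 0)
    x1, y1 = min(x + bw + mx, w), min(y + bh + my, h)
    face = image[y0:y1, x0:x1]
    face_mask = mask[y0:y1, x0:x1]
    # Multiply each channel by the mask so the background goes to black.
    return (face * face_mask[..., None]).astype(np.uint8)

# Tiny synthetic example: a 4x4 grey image, "face" in the top-left 2x2.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
msk = np.zeros((4, 4), dtype=np.float32)
msk[:2, :2] = 1.0
out = crop_and_mask_face(img, (0, 0, 2, 2), msk, margin=0.0)
```

The masked crop can then be saved out with Pillow and used as the texture painted onto the model's UV map.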
How we built it
The image preprocessing pipeline that extracts faces from images is built in Python, using the mediapipe library for face detection and OpenCV and Pillow for various image processing operations. The 3D world is built with three.js (JavaScript) and styled with HTML and CSS. For the models themselves, we use Blender for the UV mapping operations.
Challenges we ran into
Dealing with 3D model meshes proved especially difficult, as we have limited experience with 3D modelling and three.js. However, we were able to successfully map a 2D image onto the 3D model using UV maps. The image processing for the face detection and texture mapping was also fairly difficult, owing to poor documentation for some of the libraries. On top of this, it was hard to find the time to set up a server so the JavaScript front end could invoke our Python backend.
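The front-end-to-backend bridge we ran out of time for can be prototyped with nothing but the Python standard library. The sketch below is our own illustration, not project code: it exposes a single hypothetical /process-face endpoint that a fetch() call from the three.js front end could POST an image to, and it just echoes the byte count instead of running the real pipeline.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class SprimpHandler(BaseHTTPRequestHandler):
    """Accepts a POSTed image and replies with JSON; in the real app the
    body would be fed to the face-extraction pipeline."""

    def do_POST(self):
        if self.path != "/process-face":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)  # raw image bytes from the browser
        # Placeholder: report how many bytes arrived instead of running
        # the mediapipe/OpenCV pipeline on them.
        reply = json.dumps({"received_bytes": len(body)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Serve on an OS-assigned free port in a background thread.
server = HTTPServer(("127.0.0.1", 0), SprimpHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
```

From the browser side, the front end would then call something like fetch("/process-face", { method: "POST", body: imageBlob }) and use the JSON reply to spawn the new Sprimp.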
Accomplishments that we're proud of
Despite being a smaller team than usual, we accomplished a lot of computer vision, graphics, and 3D modelling work over the duration of the hackathon, all fields in which we have very little experience. While our project may not be the most polished, having a working image processing pipeline that can take a user's face from a webcam and map it onto a real 3D model with ready animations was a huge win for us!
What we learned
Inter-language connectivity is very hard! And so are hackathons without our lovely third member Oli Sharp. :)
What's next for Sprimper
We would like the models to be able to interact with each other, with different animations and verbal communication. It would also be great to have a server set up so that users could create models directly from the web app instead of through the IDE.
