Inspiration

  • Photo scanning has grown enormously lately, with NVIDIA and other companies building complex AI models that reconstruct 3D models from 2D pictures. Can a group of students build something similar within 36 hours? This project is our answer.

What it does

  • The system takes two pictures of you (front and back) and outputs a 3D model displayed in AR on the palm of your hand.

How we built it

  • AI models for processing images: OpenCV, SAM, TensorFlow, MediaPipe
  • Converting 2D into 3D: Blender Python API
  • AR display: MediaPipe
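The core of the 2D-to-3D step is turning a segmented person silhouette into a volume. The real pipeline uses a SAM/MediaPipe segmentation mask and builds the mesh with the Blender Python API (bpy); the stdlib-only sketch below is a simplified, hypothetical stand-in that just extrudes a binary mask into 3D vertices to illustrate the idea.

```python
# Hypothetical sketch of the 2D -> 3D extrusion idea (stdlib only).
# In the actual project the silhouette comes from a segmentation model
# and the mesh is assembled with bpy; a tiny hand-written mask stands
# in for both here.

def silhouette_to_vertices(mask, depth=2):
    """Extrude a binary 2D mask into a list of (x, y, z) vertices.

    Each filled pixel becomes `depth` stacked vertices, mimicking how a
    front/back photo pair can be turned into a thin 3D volume.
    """
    verts = []
    for y, row in enumerate(mask):
        for x, filled in enumerate(row):
            if filled:
                for z in range(depth):
                    verts.append((x, y, z))
    return verts

# 3x3 "plus"-shaped silhouette
mask = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
verts = silhouette_to_vertices(mask, depth=2)
print(len(verts))  # 5 filled pixels * 2 depth layers = 10 vertices
```

In the real version, these vertices would be handed to Blender to build faces and export a mesh for the AR overlay.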

Challenges we ran into

  • We initially used GluonCV, but about 3 hours before the deadline we discovered it only runs in a Google Colab environment, since it requires an old version of Torch that cannot be installed locally without a VM. So we cried, screamed in pain, got depressed, got back up, and kept going. Somehow everything magically came together once we switched to MediaPipe. Now we are rushing toward the finish.

Accomplishments that we're proud of

  • We finally finished it, although it is not as complete as we had hoped.

What we learned

  • We learned how to recover from mistakes as quickly as possible and be efficient with our time. We also learned that people who enjoy going to hackathons are probably slightly masochistic.

What's next for SculptAI

  • We intend to polish the details we could not finish within the 36 hours of HackUMass and make sure everything runs properly. After that, we will research more Machine Learning models and AR technologies to develop them further in the current project.
