Inspiration
Whether you learn a language in school or through Duolingo, there is one big pain point: immersion. You learn a lot about the language, but much of it may not be practical for you. That’s why we created Moli, where the world around you becomes your classroom.
What it does
Moli lets you immerse yourself in your own environment, learning the translations of the everyday objects and moments of your life.
How we built it
We combined the Snapchat Spectacles AR SDK with vision models fine-tuned on Hugging Face to mark each detected object with a label in English and its Spanish translation. We also used the new Snapchat-supported Supabase integration to save these objects in our database, giving each and every user a unique learning experience.
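The on-device networking layer in Lens Studio differs, but the backend round trip looks roughly like this sketch. The model name, table name, and environment variables here are placeholders, not our actual values:

```typescript
// Sketch: send a camera frame to a Hugging Face object-detection
// endpoint, then persist the detections to Supabase.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,     // hypothetical env var
  process.env.SUPABASE_ANON_KEY! // hypothetical env var
);

// Shape of a standard Hugging Face object-detection response item.
interface Detection {
  label: string; // e.g. "cup"
  score: number; // model confidence, 0..1
  box: { xmin: number; ymin: number; xmax: number; ymax: number };
}

async function detectObjects(imageBytes: ArrayBuffer): Promise<Detection[]> {
  // "facebook/detr-resnet-50" stands in for the fine-tuned model.
  const res = await fetch(
    "https://api-inference.huggingface.co/models/facebook/detr-resnet-50",
    {
      method: "POST",
      headers: { Authorization: `Bearer ${process.env.HF_TOKEN}` },
      body: imageBytes,
    }
  );
  return (await res.json()) as Detection[];
}

async function saveDetections(userId: string, detections: Detection[]) {
  // "learned_objects" is a hypothetical table name.
  const rows = detections.map((d) => ({
    user_id: userId,
    label_en: d.label,
    confidence: d.score,
  }));
  const { error } = await supabase.from("learned_objects").insert(rows);
  if (error) console.error("Supabase insert failed:", error.message);
}
```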
Challenges we ran into
Figuring out how to use the Snapchat Spectacles SDK was quite difficult at first, but as we kept experimenting with the glasses we got the hang of developing on Spectacles!
Accomplishments that we're proud of
We are proud of implementing depth caching for detected objects, paired with an intuitive UI that floats above each object and shows its translation.
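The gist of the depth cache is to remember each object's resolved world position so its label stays anchored between detection frames. A minimal sketch of that idea, with hypothetical names (Lens Studio's actual APIs differ):

```typescript
// Sketch of the depth-caching idea: once an object's world position
// is resolved from a detection's bounding box, cache it so the label
// renders immediately instead of waiting for a new detection round trip.
type Vec3 = { x: number; y: number; z: number };

interface CachedAnchor {
  worldPos: Vec3;     // last resolved 3D position of the object
  lastSeenMs: number; // when the detection was last refreshed
}

const anchorCache = new Map<string, CachedAnchor>();
const STALE_MS = 5000; // drop anchors not refreshed within 5 s

function updateAnchor(objectId: string, worldPos: Vec3): void {
  anchorCache.set(objectId, { worldPos, lastSeenMs: Date.now() });
}

// Returns the cached position for the label, or undefined if stale.
function getAnchor(objectId: string): Vec3 | undefined {
  const entry = anchorCache.get(objectId);
  if (!entry) return undefined;
  if (Date.now() - entry.lastSeenMs > STALE_MS) {
    anchorCache.delete(objectId);
    return undefined;
  }
  return entry.worldPos;
}
```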
What we learned
We learned how to develop AR applications using Lens Studio and had a blast building with Snapchat!
What's next for Moli
We would like to iterate on the user experience and reduce the latency of the full pipeline: sending an image to our Hugging Face backend instance, getting an object label, translating it, and rendering the bounding box on the Snap Spectacles (see the sketch below).
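A first step toward that optimization could be per-stage timing of the pipeline, so we know whether detection or translation dominates the round trip. A minimal sketch, assuming hypothetical detectObjects and translateToSpanish helpers standing in for our backend calls:

```typescript
// Hypothetical stand-ins for the real backend calls.
interface Detection { label: string; score: number }
declare function detectObjects(imageBytes: ArrayBuffer): Promise<Detection[]>;
declare function translateToSpanish(label: string): Promise<string>;

// Time each stage of the image -> label -> translation pipeline
// to find where the latency actually lives.
async function timedPipeline(imageBytes: ArrayBuffer) {
  const t0 = performance.now();
  const detections = await detectObjects(imageBytes); // HF inference call
  const t1 = performance.now();

  const translations = await Promise.all(
    detections.map((d) => translateToSpanish(d.label)) // e.g. via Gemini
  );
  const t2 = performance.now();

  console.log(`detection: ${(t1 - t0).toFixed(0)} ms`);
  console.log(`translation: ${(t2 - t1).toFixed(0)} ms`);
  return detections.map((d, i) => ({ ...d, label_es: translations[i] }));
}
```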
Built With
- gemini
- huggingface
- javascript
- lens-studio
- livekit
- snap-spectacles-sdk
- snapchat
- supabase
- twilio
- typescript
