DevPost Write-up
Inspiration
We were intrigued by the Logitech stylus and wanted to start there, but we had already built XR drawing and painting apps, so we wanted a fresh motivation. Meanwhile, my son is in his second year of high-school Chinese, and parents were encouraged to help. I had tried learning Chinese through audiobooks, but that never taught me to read or write the characters themselves. Could we pair the stylus with the Voice SDKs to learn to write naturally in a fully immersive learning experience? That was the challenge we undertook.
What it does
FluencyQuest is a self-driven language learning app for building Chinese skills without an instructor. The vertical slice skips the basics and drops the player into an immersive environment: the app detects the room and places them at a tabletop, where they're directed to draw the Chinese character for "table" with the stylus. That physical action unlocks the immersive restaurant learning module, which replaces the player's environment with a Chinese restaurant, complete with a food menu and a list of words to learn. Phrases are shown in English; the player can play the audio, practice drawing, write the phrase in Chinese on the table in front of them, and have their work evaluated by AI. This app represents a slice of what could become a much greater scope across languages.
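As a minimal sketch of how that AI evaluation could work (this is an illustration, not the app's actual code: the DrawingEvaluator class, prompt wording, and the way the stroke capture reaches it are our assumptions, though the OpenAI chat-completions endpoint and Unity APIs shown are real):

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Hypothetical sketch: send a capture of the player's strokes to a
// vision-capable model and ask whether it matches the target character.
public class DrawingEvaluator : MonoBehaviour
{
    [SerializeField] private string apiKey; // supplied at runtime, never committed
    private const string Endpoint = "https://api.openai.com/v1/chat/completions";

    public IEnumerator Evaluate(Texture2D drawing, string targetCharacter)
    {
        string imageB64 = System.Convert.ToBase64String(drawing.EncodeToPNG());
        // Naive hand-built JSON payload with an inline data-URL image;
        // a real implementation would use a proper JSON serializer.
        string payload =
            "{\"model\":\"gpt-4o\",\"messages\":[{\"role\":\"user\",\"content\":[" +
            "{\"type\":\"text\",\"text\":\"Does this handwriting match the Chinese character '" +
            targetCharacter + "'? Answer yes or no with a short critique.\"}," +
            "{\"type\":\"image_url\",\"image_url\":{\"url\":\"data:image/png;base64," +
            imageB64 + "\"}}]}]}";

        using var request = new UnityWebRequest(Endpoint, "POST");
        request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(payload));
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Content-Type", "application/json");
        request.SetRequestHeader("Authorization", "Bearer " + apiKey);
        yield return request.SendWebRequest();

        if (request.result == UnityWebRequest.Result.Success)
            Debug.Log("Evaluation: " + request.downloadHandler.text);
        else
            Debug.LogError("Evaluation request failed: " + request.error);
    }
}
```

Sending the capture as an inline data URL keeps the round trip to a single request; a production version would parse the JSON response rather than just logging it.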
How we built it
We assembled a hybrid team of developers and architectural designers and split tasks between coding research and visual design. We started with a series of moonshot concepts to attempt, agreeing the final product would be a combination of the success stories. We began with the basics: Scene Understanding, MX Ink integration, and Text-to-Speech. We hit roadblocks from the end of Day 1 through Day 2, but stayed through the night and had a working end-to-end flow just after midnight; the team persisted and had an impressive working demo running by 3am, twelve hours before pencils-down. That gave us enough time to start building the final deliverables, at which point we explored Spatial Anchors to add a bit more polish.
Challenges we ran into
Chinese Speech-to-Text
None of the Chinese speech-to-text systems we found online were ready for prime time. We would need more time to evaluate them all, and possibly enterprise-level licenses, or, better yet, native Chinese speakers on the team to help validate them.
Stylus SDK
The MX Ink Stylus SDK changed partway through our development and, on the last evening, tripped us up with our basic interaction model. The change was fairly drastic: an earlier version attached the stylus to the RightHandAnchor, while the new version made it a top-level GameObject and required a new input actions asset, so getting the pieces working together took several iterations.
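For illustration, here is a rough sketch of the second interaction model, assuming Unity's Input System; the class, action name, and pressure threshold are our placeholders, not the SDK's actual identifiers:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Hypothetical sketch: the stylus as a top-level GameObject whose transform
// is driven by a TrackedPoseDriver, with the tip exposed through an input
// actions asset (instead of inheriting the RightHandAnchor's transform, as
// under the earlier SDK).
public class StylusInput : MonoBehaviour
{
    [SerializeField] private InputActionReference tipPressure; // bound in the new actions asset

    private void OnEnable()  => tipPressure.action.Enable();
    private void OnDisable() => tipPressure.action.Disable();

    private void Update()
    {
        // Sample the tip each frame; the pose is already applied to this
        // object's transform by the pose driver.
        float pressure = tipPressure.action.ReadValue<float>();
        if (pressure > 0.05f)
        {
            // start or extend the current stroke at transform.position ...
        }
    }
}
```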
Stylus Line Rendering
We used the Line Drawing sample code from the Logitech SDK, but something in the LineRenderer caused strokes to run perpendicular to the surface at certain drawing angles, making them hard or impossible to read. We edited some shaders and thresholds to make it work well enough for this phase of the hack, but given more time, or a continuation of the project, we would replace that code with mesh-based drawing.
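One plausible mitigation, sketched below under the assumption that strokes stay on a flat surface (the helper class and width values are ours): Unity's LineRenderer defaults to LineAlignment.View, which billboards the ribbon toward the camera and can tilt strokes off the table at grazing angles, while LineAlignment.TransformZ keeps the ribbon in the stroke's own plane.

```csharp
using UnityEngine;

// Illustrative helper: create a stroke whose ribbon lies flat on a surface
// instead of billboarding toward the camera.
public static class StrokeSetup
{
    public static LineRenderer CreateFlatStroke(Transform parent, Vector3 surfaceNormal)
    {
        var go = new GameObject("Stroke");
        go.transform.SetParent(parent, worldPositionStays: false);
        // Point local Z along the surface normal so the ribbon lies on the table.
        go.transform.rotation = Quaternion.LookRotation(surfaceNormal);

        var line = go.AddComponent<LineRenderer>();
        line.alignment = LineAlignment.TransformZ; // face the transform's Z, not the camera
        line.useWorldSpace = false;                // keep points in the stroke's local plane
        line.widthMultiplier = 0.004f;             // ~4 mm pen width (illustrative)
        line.numCapVertices = 4;                   // rounded stroke ends
        return line;
    }
}
```

Mesh-based drawing, as noted above, would sidestep these alignment trade-offs entirely.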
Accomplishments that we're proud of
Experiments
We focused this hack on learning new technologies and techniques that we had either never tried or didn't know had been done before. Along the way we learned about Meta's Presence Platform SDKs (Interaction, Scene, Voice, Haptics), the Logitech MX Ink, other Text-to-Speech and Speech-to-Text SDKs, Spatial Anchors, and image processing with AI.
New Team
Most of the team were strangers, but we gelled almost instantly thanks to everyone's charisma and openness. Whether it was the region's culture or the organizers' vetting process, this hack attracted great people, and that was much appreciated.
What we learned
Chinese! Well, at least the word for "table": it was our "secret crest" that had to be drawn to activate the main scene, and we went through the motions so often that we learned the character by heart in less than a day. That lent credence to immersive Mixed Reality learning as an advantage over "non-VR" methods.
We also learned a lot about the new tech features and SDK modules offered by Meta and Logitech, and got to know more about the local culture of Istanbul.
What's next for FluencyQuest
We definitely want to keep exploring these mechanics for language learning and, given the opportunity, extend them to other languages. As we aren't linguists ourselves, this is an app we'd like to build for our own use, and it would make sense to pull in language experts along the way.
Built With
- aikit
- aipowered
- logitech
- logitechmxink
- logitechwinner
- openai
- passthrough
- scenesdk
- ttsmaker
- voicesdk
- winner
