Inspiration

As a student with ADHD, one of our teammates often finds it challenging to focus during lectures, making it difficult to absorb and retain information effectively. Traditional learning environments are not always designed to accommodate neurodivergent students, which can lead to frustration and disengagement. Recognizing this struggle, we wanted to create a tool that empowers students like us to not only pay attention but also stay actively engaged with the material in a way that suits our learning styles. Our goal is to enhance comprehension, retention, and overall academic success by providing an aid that transforms lectures into interactive and accessible learning experiences.

What it does

We developed an application that enhances lecture comprehension by generating real-time animations that visually reinforce the professor’s words. Unlike static slides, which can sometimes feel disengaging or overwhelming, our dynamic visuals provide an interactive and intuitive way to grasp complex concepts. By bridging the gap between auditory learning and visual understanding, our tool helps students stay focused, absorb information more effectively, and retain key concepts with greater clarity.

How we built it

We used Perplexity Deep Research to plan the generative animation pipeline and streamline our coding process, researching animation techniques, machine learning models, and optimization strategies to ensure smooth, efficient rendering. By combining OpenAI models with Groq, we generate fluid animations with smooth transitions between frames and few visual artifacts or stutters. Using natural language processing via the Groq API, the system analyzes incoming transcript text in real time and extracts animation-worthy topics. When a concept is detected, the model dynamically generates Manim code to illustrate it. The animation pipeline runs asynchronously, using a task queue to render animations efficiently while caching previously generated videos to avoid redundant work.
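
Below is a minimal sketch of that transcript-to-animation loop, assuming the official Groq Python client and the Manim CLI. The prompts, model name, and helper names (`detect_concept`, `generate_manim_code`, `on_transcript_chunk`) are illustrative placeholders, not our exact implementation.

```python
import asyncio
import hashlib
import os
import subprocess

from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])
MODEL = "llama-3.3-70b-versatile"          # placeholder model name
render_queue: asyncio.Queue = asyncio.Queue()
rendered: set[str] = set()                 # hashes of concepts already rendered (the cache)

def detect_concept(chunk: str) -> str | None:
    """Ask the LLM whether this transcript chunk contains an animation-worthy concept."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Name one concept worth animating in the user's text, or reply NONE."},
            {"role": "user", "content": chunk},
        ],
    )
    answer = resp.choices[0].message.content.strip()
    return None if answer.upper() == "NONE" else answer

def generate_manim_code(concept: str) -> str:
    """Ask the LLM for a self-contained Manim scene named `Lesson` illustrating the concept."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"Write a complete Manim scene class named Lesson that illustrates: {concept}"}],
    )
    return resp.choices[0].message.content

async def render_worker() -> None:
    """Consume concepts from the queue, render them with Manim, and skip cached ones."""
    while True:
        concept = await render_queue.get()
        key = hashlib.sha1(concept.lower().encode()).hexdigest()[:12]
        if key not in rendered:
            script = f"{key}.py"
            code = await asyncio.to_thread(generate_manim_code, concept)
            with open(script, "w") as f:
                f.write(code)
            # Render off the event loop so transcription keeps flowing while Manim works.
            await asyncio.to_thread(
                subprocess.run, ["manim", "-ql", script, "Lesson"], check=False
            )
            rendered.add(key)
        render_queue.task_done()

async def on_transcript_chunk(chunk: str) -> None:
    """Entry point called for each new piece of live transcript text."""
    concept = await asyncio.to_thread(detect_concept, chunk)
    if concept:
        await render_queue.put(concept)
```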

We implemented multithreading so that different components of the animation pipeline, such as data-structure animation rendering and AI-driven code generation, run concurrently. This significantly improved processing efficiency, reducing latency and allowing animations to run in real time without noticeable delays. We also tuned buffer management to minimize lag, enabling near-instantaneous transitions between animation states. Through this approach, we achieved high-quality, dynamic animations that respond quickly to real-time input while remaining computationally efficient. Built with scalability in mind, the backend supports multiple concurrent WebSocket connections, so interactive applications can integrate it for real-time visualization, and a caching mechanism reuses animations for previously encountered concepts to avoid redundant processing.
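
A hedged sketch of that concurrency layer is shown below: a thread pool handles the blocking render work while an asyncio WebSocket server fans results out to every connected viewer. It assumes a recent version of the `websockets` package (for `websockets.broadcast` and single-argument handlers); the function and variable names are illustrative, not taken from our repository.

```python
import asyncio
import json
from concurrent.futures import ThreadPoolExecutor

import websockets

clients: set = set()                              # currently connected viewers
cache: dict[str, str] = {}                        # concept -> rendered video path
render_pool = ThreadPoolExecutor(max_workers=4)   # pool for CPU-bound Manim renders

def render_animation(concept: str) -> str:
    """Stand-in for the Manim render step (see the previous sketch); returns a video path."""
    return f"rendered/{concept.replace(' ', '_')}.mp4"

async def push_concept(concept: str) -> None:
    """Render (or reuse) an animation and broadcast its path to all connected clients."""
    if concept not in cache:
        loop = asyncio.get_running_loop()
        # Run the blocking render in the pool so the event loop stays responsive.
        cache[concept] = await loop.run_in_executor(render_pool, render_animation, concept)
    websockets.broadcast(clients, json.dumps({"concept": concept, "video": cache[concept]}))

async def handler(ws) -> None:
    """Track each WebSocket connection for the lifetime of the session."""
    clients.add(ws)
    try:
        await ws.wait_closed()
    finally:
        clients.discard(ws)

async def main() -> None:
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```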

On the Unity side, we focused on building a robust integration with Zoom's Real-Time Media Streams (RTMS) to keep animation rendering responsive and accurate. We also set up our Unity environment, carefully configuring assets, physics, and rendering settings to support high-performance generative animations. Finally, we connected Zoom RTMS to the Meta Quest and converted the live transcript from Zoom meetings into real-time captions that scroll to avoid overflowing the display.
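
The caption code itself lives in C# inside Unity; the Python sketch below only illustrates the scrolling behaviour: wrap incoming transcript text to a fixed width and keep just the last few lines on screen so the overlay never overflows. The constants and function name are placeholders.

```python
import textwrap
from collections import deque

MAX_CHARS_PER_LINE = 40   # placeholder width for the Quest caption overlay
MAX_VISIBLE_LINES = 3     # completed lines kept above the line currently being written

caption_lines: deque[str] = deque(maxlen=MAX_VISIBLE_LINES)
pending_text = ""

def on_transcript_chunk(chunk: str) -> str:
    """Append a new transcript chunk and return the text currently shown on screen."""
    global pending_text
    pending_text = (pending_text + " " + chunk).strip()
    wrapped = textwrap.wrap(pending_text, MAX_CHARS_PER_LINE)
    # Completed lines scroll into the deque (oldest ones fall off); the last line keeps growing.
    for line in wrapped[:-1]:
        caption_lines.append(line)
    pending_text = wrapped[-1] if wrapped else ""
    return "\n".join([*caption_lines, pending_text])
```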

Challenges we ran into

We initially tried to integrate the application into a VR environment using the Zoom SDK, but were unable to put all the pieces together. The Meta Quest could not integrate the MP3 audio files that were essential for the video processing, which made building an integrated VR environment very difficult.

Accomplishments that we're proud of

Created a self-contained generative animation agent, built on Groq and OpenAI, that is highly efficient and runs in real time in VR.

What we learned

Integrating VR into our application presented significant challenges, from optimizing performance to ensuring seamless real-time interactions. However, through this process, we gained a deeper understanding of the complexities involved in VR development. Most of us had zero experience with Unity prior to this hackathon, so tackling the steep learning curve was challenging but deeply rewarding. Additionally, we discovered the immense potential of generative animations in education—these dynamic visuals have the power to revolutionize learning by making complex concepts more intuitive and engaging. This experience reinforced our belief that interactive and adaptive technologies are the future of education.

What's next for Immersive-Ed

Our next step is to fully integrate Immersive-Ed into VR, creating an even more engaging and interactive learning experience. This will involve researching the best VR platforms, refining our real-time animation system for immersive environments, and overcoming the technical challenges of seamless integration. We aim to enhance accessibility and adaptability, ensuring that students can benefit from a truly immersive educational tool that caters to diverse learning styles.
