
Inspiration

Kids today are growing up with constant screen exposure, and parents struggle to stay informed without crossing into surveillance. We want a future where technology supports children emotionally in a calm, human way, without adding another screen to their lives.

What it does

TedTalks is a screen-free, AI-powered teddy bear that uses NLP to understand what a child says and respond in a safe, friendly way. Conversations are distilled into simple events that reflect mood and topics, giving parents high-level insights without live monitoring or constant tracking.
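
For illustration, a single conversation event can be as small as the sketch below; the field names are placeholders we chose for this example, not the exact schema.

```python
# Hypothetical shape of one conversation event; the field names below are
# illustrative placeholders, not the exact schema TedTalks uses.
example_event = {
    "timestamp": "2026-01-17T14:32:00Z",  # when the exchange happened
    "mood": "happy",                       # coarse sentiment label from NLP
    "topics": ["school", "friends"],       # high-level topics, no transcript stored
}
```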

How we built it

We built TedTalks around a Raspberry Pi acting as a standalone voice device. The Pi handles audio input and output (a USB microphone and a Bluetooth speaker housed inside the bear), runs speech-to-text, and uses a lightweight AI model to generate responses. Conversation events are sent to a backend, where NLP analysis is applied and the results are stored in MongoDB and visualized in a parent-facing frontend.
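
A stripped-down sketch of the Pi-side loop is below. It uses the speech_recognition, pyttsx3, and requests libraries as stand-ins, and the backend URL, event fields, and generate_reply stub are placeholders rather than our exact implementation.

```python
import requests
import speech_recognition as sr  # stand-in speech-to-text library
import pyttsx3                   # stand-in text-to-speech library

BACKEND_URL = "http://example.com/api/events"  # placeholder endpoint

recognizer = sr.Recognizer()
tts = pyttsx3.init()

def generate_reply(text: str) -> str:
    """Stand-in for the lightweight AI model that crafts a child-safe reply."""
    return "That sounds fun! Tell me more."

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    while True:
        audio = recognizer.listen(source)               # capture from the USB mic
        try:
            heard = recognizer.recognize_google(audio)  # speech-to-text
        except sr.UnknownValueError:
            continue                                    # nothing intelligible, keep listening
        reply = generate_reply(heard)
        tts.say(reply)                                  # play reply through the speaker
        tts.runAndWait()
        # Send a lightweight event to the backend for NLP analysis and storage
        requests.post(BACKEND_URL, json={"utterance": heard, "reply": reply})
```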

Challenges we ran into

This was the first time anyone on the team had used a Raspberry Pi, which made setup and connectivity challenging. We also lost time on YouTube tutorials whose commands were incompatible with our system. While the backend successfully receives and stores real conversation data in MongoDB, we were unable to fully wire the frontend to the backend within the hackathon timeframe. To keep the demo clear and stable, the frontend currently uses hardcoded data that mirrors the real backend data, with full integration planned next.
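
The working backend piece is deliberately small. A simplified sketch of the event-ingest endpoint, using Flask and pymongo as stand-ins with placeholder names, looks roughly like this:

```python
# Simplified sketch of the event-ingest endpoint; Flask, pymongo, and the
# names below (database "tedtalks", collection "events", route "/api/events")
# are stand-ins, not necessarily what our backend actually uses.
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
events = MongoClient("mongodb://localhost:27017")["tedtalks"]["events"]

@app.route("/api/events", methods=["POST"])
def ingest_event():
    event = request.get_json()  # event posted by the Raspberry Pi
    # NLP analysis (mood/topic extraction) would run here before storing
    events.insert_one(event)
    return jsonify({"status": "ok"}), 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```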

Accomplishments that we're proud of

We’re proud that we built a working end-to-end prototype under heavy time pressure, learned new hardware and software stacks from scratch, and delivered a concept that feels supportive rather than invasive. Most importantly, we didn’t give up, even when things broke late into the hackathon.

What we learned

We learned a lot about NLP, embedded systems, and full-stack integration, but also about ourselves. We learned how far we can push our limits, how important teamwork is, and that asking for help can unlock solutions faster than struggling alone.

What's next for TedTalks

Next, we want to fully connect the frontend to live database data, improve and speed up the AI and NLP models, and fine-tune responses even further. We also want to explore advanced safety features like keyword-triggered location alerts if a child feels in danger, while continuing to keep the system privacy-first and screen-free.

Built With

Raspberry Pi, speech-to-text, NLP, MongoDB
