ThinkBack: Turning Listening Into Learning

Inspiration

Have you ever finished watching a long lecture, only to forget nearly everything an hour later?

A large portion of real learning happens through audio and video: YouTube lectures, tutorials, recorded meetings, and long-form explanations that are difficult to study from.

Tools like OpenNote can generate strong questions and reflections; we build on those tools to deliver a feedback system that keeps the user actively paying attention.

ThinkBack is our answer, built to maximize students' learning experience: capture what we're hearing, transcribe it automatically, and convert it into active recall and reflection, without breaking focus.


What ThinkBack Does

ThinkBack is a web application (designed to pair cleanly with a browser-extension workflow) that:

  • Captures audio from a live source (e.g., a YouTube lecture)
  • Transcribes audio into text in near real time
  • Sends transcript context to OpenNote to generate:
    • Practice questions
    • Journaling / reflection prompts
    • Summaries / structured notes

At its core, ThinkBack follows a simple learning pipeline:

audio / video → transcript → questions + reflection

The key difference is timing: ThinkBack does not wait until the content ends. It injects recall and reflection while learning is happening.


How We Built It

System Overview

ThinkBack runs as a timed study loop:

  1. Start a session (begin capturing audio)
  2. Transcribe audio into text (chunk by chunk)
  3. Accumulate context (maintain a rolling transcript buffer)
  4. Generate outputs (send relevant transcript slices to OpenNote)
  5. Repeat on a fixed cadence (e.g., every 30 seconds)

This loop transforms passive listening into an active learning process.


Audio Capture (Chunked Recording)

Instead of recording a single massive audio file, we capture audio in short chunks. Chunking keeps files manageable and enables question generation while the user is still watching or listening.
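In the browser this maps naturally onto the standard MediaRecorder API, which can emit audio chunks on a fixed timeslice. The sketch below separates the pure chunk-assembly logic from the browser wiring; the helper names (`collectChunk`, `sendToTranscriber`) and the chunk sizing are illustrative assumptions, not ThinkBack's exact code:

```typescript
const CHUNK_MS = 10_000; // ~10 s chunks: small enough to transcribe quickly

// Accumulate recorded parts; return a combined Blob once enough parts
// have arrived to be worth sending to the transcriber.
function collectChunk(pending: Blob[], next: Blob, maxParts = 1): Blob | null {
  pending.push(next);
  if (pending.length < maxParts) return null;
  // splice(0) drains the buffer so the next chunk starts fresh
  return new Blob(pending.splice(0), { type: "audio/webm" });
}

// Browser wiring (illustrative; MediaRecorder and getDisplayMedia are
// standard Web APIs, but sendToTranscriber is a hypothetical helper):
//
// const stream = await navigator.mediaDevices.getDisplayMedia({ audio: true });
// const recorder = new MediaRecorder(stream);
// const pending: Blob[] = [];
// recorder.ondataavailable = (e) => {
//   const blob = collectChunk(pending, e.data);
//   if (blob) sendToTranscriber(blob);
// };
// recorder.start(CHUNK_MS); // emit a chunk roughly every CHUNK_MS
```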


Transcription (Rolling Buffer)

Each audio chunk is sent to a Whisper-style transcription API. The returned text is appended to a running transcript buffer, acting as the system’s short-term memory.
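A rolling buffer of this kind can be as simple as a timestamped list that discards old segments. This is a minimal sketch under our own naming (`TranscriptBuffer`, a ~2 minute window), not the exact implementation:

```typescript
// Short-term memory for the session: keeps only recent transcript segments.
class TranscriptBuffer {
  private segments: { text: string; at: number }[] = [];

  constructor(private windowMs = 120_000) {} // keep ~2 minutes of context

  append(text: string, at: number) {
    this.segments.push({ text, at });
    // Drop anything older than the window, so context stays relevant
    // to what the learner just heard.
    const cutoff = at - this.windowMs;
    this.segments = this.segments.filter((s) => s.at >= cutoff);
  }

  // The slice of transcript sent onward for question generation.
  context(): string {
    return this.segments.map((s) => s.text).join(" ");
  }
}
```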


OpenNote Integration (Active Learning Outputs)

Once there is sufficient transcript context, ThinkBack sends it to OpenNote to generate practice problems based on what the user has just seen. This step converts raw text into active recall, reflection, and learning artifacts.
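A request to the generation step might be shaped like the following. To be clear, the endpoint URL and request schema here are assumptions for illustration, not OpenNote's documented API:

```typescript
// Illustrative request builder; field names and the endpoint are assumed.
interface QuestionRequest {
  task: "practice_questions" | "reflection" | "summary";
  context: string; // the recent transcript slice
  count: number;   // how many items to generate
}

function buildQuestionRequest(transcriptSlice: string, count = 3): QuestionRequest {
  return {
    task: "practice_questions",
    context: transcriptSlice.trim(),
    count,
  };
}

// Hypothetical usage (placeholder URL, not a real endpoint):
//
// const res = await fetch("https://api.opennote.example/generate", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildQuestionRequest(buffer.context())),
// });
```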


The Study Loop (Timed Prompts)

ThinkBack schedules question generation on a timer so it feels like a live study coach. Instead of binge-watching and hoping learning happens later, the system nudges recall and reflection during the content.


Challenges We Faced

Figuring out UX

When designing the user experience, we went through a lot of trial and error. Flows that seem obvious in hindsight weren't obvious at the time, and several went unimplemented at first. As a result, we spent a lot of time making sure the user flow was smooth.

Roadmapping the Project

Because we didn't plan the whole project up front, we split the work in a way that wasn't optimal for everyone. One person's backend changes would accidentally block two other developers from making progress. With a steadier plan, we could have maximized our productivity.


What We Learned

Engineering Lessons

  • Timers require state: interval-based systems must respect pause/resume/stop conditions.
  • Chunking is a product decision: it directly impacts learning quality.
  • Async systems must surface progress: without visible states, users assume failure.
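The "surface progress" lesson is cheap to act on: give every chunk an explicit lifecycle state and render a label for it. The states and labels below are illustrative, not our exact UI strings:

```typescript
// Explicit per-chunk lifecycle so the UI never goes silent mid-pipeline.
type ChunkStatus = "recording" | "transcribing" | "generating" | "done" | "error";

function statusLabel(s: ChunkStatus): string {
  switch (s) {
    case "recording":    return "Listening...";
    case "transcribing": return "Transcribing audio...";
    case "generating":   return "Writing practice questions...";
    case "done":         return "Ready";
    case "error":        return "Something went wrong, retrying";
  }
}
```

Without a visible state like this, a 30-second transcription round-trip looks identical to a crash.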

Product Lessons

  • The most effective tools extend existing systems rather than replacing them. ThinkBack acts as an adapter that unlocks OpenNote for video-first learning.
  • “AI” only feels valuable when it removes friction and reinforces a habit loop:

watch → recall → reflect → learn


What’s Next

If we continue developing ThinkBack, the next upgrades include:

  • Video-only screenshots (capture the video frame, not the entire desktop)
  • Timestamped notes and questions (click a prompt → jump to the source moment)
  • Export formats (Markdown, Anki decks, shareable study docs)
  • Smarter context selection using semantic boundaries instead of fixed time windows

Closing

ThinkBack started from a single frustration: we wanted powerful question generation on the content we actually learn from—especially YouTube. By combining transcription with a timed recall loop, ThinkBack transforms passive listening into active learning, one prompt at a time.
