Inspiration

Twitter all too often serves as a divisive platform where users spew hateful rhetoric and grow further polarized in their political views.

No one wins a Twitter fight.

What it does

TwitContext replies to these arguments with a short 30-60 second video designed to educate both sides and demonstrate their common ground.

How we built it

  1. Using the Twitter API, we extract the series of threads pertaining to an argument.
  2. We feed the tweets and user data to GPT-4 with a prompt instructing it to write a video transcript while acting as an "expert mediator".
  3. We then feed the transcript back into GPT-4 and ask it to segment it and to produce a stock-footage query that represents each segment.
  4. We assemble everything using FFmpeg for video and subtitle editing, ElevenLabs for voice generation, and Serp for forced alignment.
  5. Once it's all put together, we use the Twitter API again to post the video as a reply to the top-level tweet (a rough sketch of the pipeline follows this list).
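
Concretely, the pipeline looks roughly like the sketch below. This is illustrative rather than our exact code: the credentials, the MEDIATOR_PROMPT wording, and the helper names are assumptions, and it leans on tweepy's v2 client, the openai Python package, and the ElevenLabs REST API. Forced alignment for subtitle timing is omitted for brevity.

```python
# Minimal pipeline sketch, not the exact implementation. Credentials,
# prompt wording, and helper names are illustrative assumptions.
import json
import subprocess

import requests
import tweepy
from openai import OpenAI

twitter = tweepy.Client(bearer_token="...")  # read; posting needs user auth
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

MEDIATOR_PROMPT = (
    "You are an expert mediator. Given this Twitter argument, write a "
    "30-60 second video transcript that educates both sides and "
    "demonstrates their common ground."
)

def fetch_thread(tweet_id: str) -> str:
    """Step 1: pull the argument's replies via a conversation_id search."""
    replies = twitter.search_recent_tweets(
        query=f"conversation_id:{tweet_id}", tweet_fields=["author_id"]
    )
    return "\n".join(t.text for t in (replies.data or []))

def write_transcript(thread: str) -> str:
    """Step 2: ask GPT-4, acting as the mediator, to draft the script."""
    resp = llm.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": MEDIATOR_PROMPT},
            {"role": "user", "content": thread},
        ],
    )
    return resp.choices[0].message.content

def segment(transcript: str) -> list[dict]:
    """Step 3: a second GPT-4 pass splits the script into segments,
    each paired with a stock-footage search query."""
    resp = llm.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Split this transcript into segments. Return JSON: "
                       '[{"text": ..., "footage_query": ...}]\n' + transcript,
        }],
    )
    return json.loads(resp.choices[0].message.content)

def narrate(transcript: str, voice_id: str, out: str = "narration.mp3") -> str:
    """Step 4a: synthesize the voiceover with ElevenLabs."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": "..."},
        json={"text": transcript},
    )
    with open(out, "wb") as f:
        f.write(resp.content)
    return out

def assemble(footage: str, audio: str, subs: str, out: str = "video.mp4") -> str:
    """Step 4b: mux footage and narration, burning in the subtitles."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", footage, "-i", audio,
         "-vf", f"subtitles={subs}", "-shortest", out],
        check=True,
    )
    return out

def reply_with_video(tweet_id: str, path: str) -> None:
    """Step 5: upload the video (v1.1 media endpoint) and reply to the
    top-level tweet."""
    auth = tweepy.OAuth1UserHandler("key", "secret", "token", "token_secret")
    media = tweepy.API(auth).media_upload(
        path, chunked=True, media_category="tweet_video"
    )
    twitter.create_tweet(
        text="Some context on this thread:",
        media_ids=[media.media_id],
        in_reply_to_tweet_id=tweet_id,
    )
```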

Challenges we ran into

  • FFmpeg is terrible to work with.
  • The Twitter API is even more terrible to work with.
  • Prompt engineering ended up being far more significant to our success than we had initially assumed.

Accomplishments that we're proud of

Given only a tweet ID, we can automatically create an educational video that effectively presents both sides of a complex issue, all in a matter of minutes.

While each generation differs slightly from the last, every video we generate is coherent and accomplishes the goals we set out to achieve.
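
To make the "only a tweet ID" claim concrete, the whole run collapses to a single driver. This is hypothetical glue over the pipeline sketch above; fetch_footage() stands in for the stock-footage lookup and is not shown.

```python
# Hypothetical driver over the pipeline sketch: everything downstream
# of the tweet ID is automatic. fetch_footage() is a stand-in for the
# stock-footage lookup and is not defined here.
def make_context_video(tweet_id: str) -> None:
    thread = fetch_thread(tweet_id)
    script = write_transcript(thread)
    segments = segment(script)
    audio = narrate(script, voice_id="...")
    footage = fetch_footage(segments)      # stock clips per footage_query
    video = assemble(footage, audio, subs="subs.srt")
    reply_with_video(tweet_id, video)

make_context_video("1234567890")  # a tweet ID is the only input
```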

What we learned

Large models are incredibly capable, and with just simple tools layered on top of them, we can build genuinely inspiring projects.

What's next for TwitContext

Host it on Railway and open-source the code!

Built With

FFmpeg, GPT-4, ElevenLabs, Serp, Twitter API
