Detecting hate speech in video comments is a formidable challenge, and the difficulty is far greater in the context of livestreams.
The rapid pace and sheer volume of livestream comments can overwhelm human moderators, and current automated technologies (which often rely on scanning comments for banned words) are easily deceived.
This results in harmful comments not being addressed promptly, allowing hate speech to infiltrate the viewing experience of potentially thousands of live audience members. Such exposure not only harms those targeted, but also contributes to a toxic online environment, adversely affecting the mental health of viewers and streamers alike.

To address these issues, I developed Hate Speech Eliminator, a tool designed specifically for TikTok Live.
It leverages Artificial Intelligence and real-time livestream context analysis to quickly and accurately detect hate speech. Because it considers the livestream itself rather than analysing comment text in isolation, it can identify and address hate speech almost instantaneously and far more accurately than existing solutions.
Hate Speech Eliminator catches hate speech that other solutions miss, including subtle hate speech that normally flies under the radar with text-only analysis.
It's designed to be incredibly simple to use. On the home screen, simply enter the username of someone who is currently livestreaming, then click 'Start'. Hate Speech Eliminator will begin actively monitoring the livestream comments for potential hate speech.

Now you can relax! If any potential hate speech is detected, an alert will appear that allows you to see more information and take immediate action.

Hate Speech Eliminator relies on AI context analysis to quickly and accurately detect hate speech that would otherwise be missed by traditional hate speech detection models.
Every 30 seconds, it takes a snapshot of the livestream and utilises AI to accurately describe what's happening in it. This context - which could include who's in the livestream, what they're doing, and their environment - is crucial for effectively identifying hate speech that other solutions miss.
When a new comment is received on the livestream, the current context is passed to the AI model, which then analyses the comment for hate speech while also considering the broader context of the livestream. This method radically improves hate speech detection.
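
To make that flow concrete, here is a simplified, untested sketch of the snapshot-and-context pipeline using Playwright and the OpenAI Python SDK. The helper names, the prompts, and the `gpt-4o-mini` model choice are illustrative assumptions rather than the project's actual code; in the real tool the snapshot step repeats every 30 seconds.

```python
import base64

from openai import OpenAI
from playwright.sync_api import sync_playwright

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def capture_snapshot(livestream_url: str, path: str = "snapshot.png") -> str:
    """Grab a single frame of the livestream page with Playwright."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(livestream_url)
        page.screenshot(path=path)
        browser.close()
    return path


def describe_snapshot(path: str) -> str:
    """Ask a vision-capable model who is on stream, what they're doing, and where."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any vision-capable model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Briefly describe this livestream: who is visible, "
                         "what they are doing, and their surroundings."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


def classify_comment(comment: str, context: str) -> str:
    """Judge a new comment against the most recent livestream context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You detect hate speech in livestream comments. Use the "
                        "livestream context to catch comments that are only hateful "
                        "in context. Answer HATE or OK, with a short reason."},
            {"role": "user",
             "content": f"Livestream context: {context}\nComment: {comment}"},
        ],
    )
    return response.choices[0].message.content
```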

Consider the comment "this is gross", it might not typically be flagged as hate speech by traditional hate speech detectors. However, Hate Speech Eliminator, recognising the context of a wedding livestream, identified it as likely targeting an ethnic minority.
Other examples include comments that were flagged for targeting a fitness influencer's disability and an influencer's cultural heritage, illustrating the sophisticated nature of this tool in understanding nuanced instances of hate speech.

These examples, while relatively mild, demonstrate that Hate Speech Eliminator can also catch more severe instances of hate speech that have not been included in this overview. This technology not only detects what other solutions miss but also promises a safer online environment for livestream viewers.
I believe Hate Speech Eliminator is an excellent example of using AI in an ethical, safe, and responsible way to make online interactions healthier. It excels because it uses AI to understand the context in which comments are made during livestreams, rather than just scanning the words to determine if something harmful is being said. Humans excel at this kind of contextual judgement, but it has been out of reach of traditional hate speech detection models.
Hate Speech Eliminator ensures user privacy by not storing any video data. Once AI processing is complete, all snapshots and contextual files are removed. Although it uses an external AI service to process snapshots and comments, the returned data is displayed but never stored. This method ensures that Hate Speech Eliminator can effectively protect online communities from hate speech, while also maintaining individual privacy.
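
As a rough illustration of that no-retention policy, the context refresh could be wrapped so the snapshot file is deleted the moment AI processing finishes, even if the request fails. This sketch reuses the hypothetical helpers from above and is not the project's actual code.

```python
import os


def refresh_context(livestream_url: str) -> str:
    """Capture a snapshot, describe it, and delete the image immediately afterwards."""
    path = capture_snapshot(livestream_url)
    try:
        return describe_snapshot(path)
    finally:
        # No video data is kept: the snapshot is removed as soon as AI
        # processing completes, whether or not it succeeded.
        if os.path.exists(path):
            os.remove(path)
```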
Hate Speech Eliminator was developed using Python, and the source code is available via the HateSpeechEliminator GitHub Repo. Additional technical details are available in this doc.
- Hate Speech Eliminator was developed using Python.
- The TikTok Display API is used to retrieve user info.
- The TikTok Live API is used to retrieve livestream comments.
- Playwright is used to retrieve snapshots.
- It uses the OpenAI API for AI queries and context analysis.
- The Gooey and Windows-Toasts libraries were used to create the UI.
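
For a sense of how these pieces fit together, here is a rough, untested sketch that listens for livestream comments with the TikTokLive package and feeds them through the hypothetical `refresh_context` and `classify_comment` helpers from the earlier sketches. Import paths and event attributes differ between TikTokLive versions, and the Display API, Gooey UI, and Windows-Toasts alerts are omitted for brevity.

```python
import asyncio
import time

from TikTokLive import TikTokLiveClient
from TikTokLive.events import CommentEvent  # import path varies by TikTokLive version

USERNAME = "@example_streamer"  # hypothetical; entered on the home screen in the real tool
client = TikTokLiveClient(unique_id=USERNAME)

context = {"description": "", "updated": 0.0}


@client.on(CommentEvent)
async def on_comment(event: CommentEvent) -> None:
    # Refresh the livestream context roughly every 30 seconds.
    if time.time() - context["updated"] > 30:
        context["description"] = await asyncio.to_thread(
            refresh_context, f"https://www.tiktok.com/{USERNAME}/live"
        )
        context["updated"] = time.time()

    verdict = await asyncio.to_thread(
        classify_comment, event.comment, context["description"]
    )
    if verdict.startswith("HATE"):
        # The real tool raises an alert via Windows-Toasts instead of printing.
        print(f"ALERT: {event.comment!r} -> {verdict}")


if __name__ == "__main__":
    client.run()
```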

Hate Speech Eliminator can be downloaded and used right now - just enter your own OpenAI API key under the advanced settings tab.
In the future, I hope to expand Hate Speech Eliminator to monitor hate speech across all videos, not just livestreams.
Thank you for your interest in making online spaces safer. Please feel free to leave any questions below.
Built With
- python
- tiktok-api
