Inspiration
The inspiration for this project came from a technique news stations use to get their content viewed more, and with more engagement. News stations, YouTube Shorts, TikTok, Instagram Reels, and many other video-sharing platforms push emotionally charged, negative, or distressing content because it increases user engagement. This content is emotionally and mentally draining for viewers, and people are exposed to more of it today than ever before. Since these platforms are on the rise, and people struggling with their mental health are especially susceptible to falling into this trap, we decided that building a solution to this problem was the best idea for our project.
What it does
The PosiTube Chrome extension filters out negative content on YouTube Shorts as you scroll. When a video is flagged, a popup lets you choose whether to keep watching or to skip it. An adjustable tolerance setting personalizes the extension to each person's needs.
How we built it
Large Language Model: First, we built our own custom large language model, trained on a dataset of tweets similar in style to the titles and captions of YouTube Shorts videos, to serve as a sentiment-detection system for each video's title and text.
Extension: Next, we created a Chrome extension that communicates with the page where YouTube Shorts are viewed. The extension collects data from the page, including the video's title and captions, which we feed into the LLM.
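As a rough illustration, a content script along the following lines can pull that text out of the page. The selectors here are assumptions: YouTube's markup changes often, so treat them as placeholders rather than the exact ones we ship.

```javascript
// content.js - minimal sketch of grabbing the current Short's text.
// NOTE: both selectors are hypothetical placeholders; verify the real
// ones in DevTools, since YouTube's DOM changes frequently.
function getShortText() {
  // Title/overlay text of the currently active Short (selector assumed).
  const titleEl = document.querySelector('ytd-reel-video-renderer[is-active] h2');
  const title = titleEl ? titleEl.textContent.trim() : '';

  // Caption text, if the captions overlay is rendered (selector assumed).
  const captionEls = document.querySelectorAll('.ytp-caption-segment');
  const captions = Array.from(captionEls).map((el) => el.textContent).join(' ');

  return { title, captions };
}
```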
Server: We then set up a server that relays what the front end sees to the backend, which determines whether negative content was detected in the video and how negative it is, so that we can trigger a popup based on the specified tolerance.
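From the extension's side, the round trip might look something like this. The endpoint URL, request shape, and response field are illustrative assumptions, not our exact API.

```javascript
// content.js - sending the scraped text to the backend for scoring.
// The URL and the JSON shapes below are illustrative assumptions.
async function scoreText(title, captions) {
  const response = await fetch('http://localhost:5000/score', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: `${title} ${captions}` }),
  });
  // Assumed response shape: { "negativity": <number between 0 and 1> }
  const { negativity } = await response.json();
  return negativity;
}
```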
Popup: If the detected negativity exceeds the specified tolerance, a popup is triggered. The popup informs the user that there is negative content and lets them choose between skipping the video or watching it anyway. If the user skips, the extension automatically plays the next Short.
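Tying the sketches together, the decision logic is roughly: score the current Short, compare against the user's tolerance, and advance on skip. The helpers come from the sketches above; the browser-native confirm() stands in for our custom overlay, and simulating an ArrowDown keypress to advance is an assumption about how Shorts navigation can be driven.

```javascript
// Sketch of the flagging flow (helper names assumed from the sketches above).
async function checkCurrentShort(tolerance) {
  const { title, captions } = getShortText();
  const negativity = await scoreText(title, captions);

  if (negativity > tolerance) {
    // confirm() is a stand-in for the extension's custom popup UI.
    const skip = confirm('Negative content detected. Skip this video?');
    if (skip) {
      // Shorts advances on ArrowDown; dispatching the key simulates scrolling.
      document.dispatchEvent(
        new KeyboardEvent('keydown', { key: 'ArrowDown', bubbles: true })
      );
    }
  }
}
```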
Challenges we ran into
Team members: When we first assembled the team, there were three members. One of them was around for the first few days of the project, then ghosted the team. This broke apart the structure and work assignments we had planned out and put us back at square one with only around eight days left to complete the project. Two new members were added and brought up to speed; the project got off to a slow start, and every member had to put in many extra hours to make up for it.
Hardware limitations: Since we trained our own large language model, we needed powerful hardware to train it in time and with enough accuracy to properly predict the sentiment of YouTube text. No one on the team had hardware with enough power, so we contacted universities asking to use their resources, and eventually settled on using Google Colab.
Software issues: To build the extension, the team worked across a MacBook, a Chromebook, and Windows machines. This created discrepancies in the code, with problems arising that were specific to one platform and absent on the others.
New coding concepts: Before starting the project, no one on the team had much, if any, experience building artificial intelligence models or Chrome extensions. This made the project a huge learning experience, with many hours of work going into figuring out how each concept actually works before applying it to our extension.
YouTube text sentiment: The titles and captions on YouTube Shorts aren't very similar to how people normally talk. This added an extra layer of complexity to building the LLM and made an appropriate dataset for training the sentiment analysis hard to find. We ended up using a tweet dataset containing 1.6 million tweets, each labeled with a positive or negative sentiment.
Accomplishments that we're proud of
Speed of results returned to the extension: We are proud that the extension can detect the content of a YouTube Short, send it to the server, compute its negativity, and return a verdict to the user within ~0.6 seconds on average. This is barely noticeable when scrolling casually through YouTube Shorts.
Accuracy in detecting negative content: Through our tests on an average YouTube Shorts feed, the amount of negative content detected by our extension is ~92% at a relatively high tolerance, and a staggering ~98.5% at a high tolerance. If the model had been trained for longer on better hardware, we are confident those numbers could have been even higher.
Custom UI: All of the UI elements in our project are unique and custom-made. They were expertly crafted and are well suited to the extension's use.
What we learned
AI skills: No one on our team had much experience with AI before this project. Creating our own custom model, troubleshooting it, and figuring out the ins and outs of AI along the way was a massive learning experience that paid off well.
JavaScript web interactions: We used JavaScript to interact with YouTube Shorts: scrolling to the next video and capturing the captions and title. We learned to do this in an optimized manner and to send the captured information on appropriately.
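One pattern worth noting is reacting when a new Short loads rather than polling, since YouTube is a single-page app and full page loads don't fire. The sketch below listens for the yt-navigate-finish event that YouTube pages emit; relying on it as the trigger is an assumption, not a documented API.

```javascript
// React when YouTube finishes an in-page navigation (SPA routing).
// Using 'yt-navigate-finish' as the trigger is an assumption.
window.addEventListener('yt-navigate-finish', () => {
  if (location.pathname.startsWith('/shorts/')) {
    // Re-run the sentiment check for the newly loaded Short,
    // e.g. checkCurrentShort(userTolerance) from the earlier sketch.
  }
});
```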
Chrome extension creation: We created a custom Chrome extension for this project and integrated all of our front end and back end through it.
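For context, a Manifest V3 skeleton for an extension like this would look roughly as follows; the file names, match pattern, and host permission are illustrative, not our exact manifest.

```json
{
  "manifest_version": 3,
  "name": "PosiTube",
  "version": "1.0",
  "description": "Flags negative YouTube Shorts before you watch them.",
  "content_scripts": [
    {
      "matches": ["https://www.youtube.com/shorts/*"],
      "js": ["content.js"]
    }
  ],
  "host_permissions": ["http://localhost:5000/*"]
}
```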
What's next for PosiTube
PosiTube only works on YouTube Shorts for now; in the future, we see it working on other popular platforms like TikTok and Instagram Reels. We also see it being deployed as a mobile app, so that it can function outside of a browser as well.
