Inspiration
We were inspired by a former classmate of mine, who suffered from epileptic seizures. In our high school Chemistry class, we watched a video without realizing that flashing lights were about to appear on the screen. A few seconds later, as the brightness bombarded us, he was thrown into an epileptic seizure and began to convulse on the floor. I shouted for the teacher and ran through the hallways seeking help until I reached the school nurse, gasping for air. I was shocked and appalled that videos like that do not come with pre-warning systems, and our team was motivated to help prevent incidents like these in the future. We want to alert users every time flashing appears, giving them more control over their video consumption, and their lives.
What it does
FlashAlert buffers a user’s video and analyzes it a few seconds ahead of playback to identify potentially seizure-inducing content and warn the user before it appears. It does this with the OpenCV library in Python, delivered through Chrome’s extension platform, to create a safer online viewing environment for people who suffer from photosensitive epilepsy.
How we built it
In this project, we developed the back-end entirely with Python, Flask, and OpenCV, built the front-end in JavaScript using the Fetch API, and connected the two with WebSocket. We chose WebSocket over a REST API because it gave us a low-latency, bidirectional connection over which we could stream multiple pieces of data back and forth.
For the back-end, we converted each frame to grayscale, collapsing the RGB channels into a single intensity value per pixel. Then we ran statistical analysis on the standard deviations of the frame-to-frame intensity changes to return the timestamps at which flashes occurred.
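As a rough illustration of the idea, one can flag frames whose mean-brightness change is an outlier relative to the standard deviation of all frame-to-frame changes. The function name, the outlier rule, and the parameter `k` below are our simplification for this writeup, not the exact criteria we shipped (in the real pipeline, OpenCV handles decoding and the grayscale conversion):

```python
import numpy as np

def find_flash_timestamps(gray_frames, fps=30.0, k=2.0):
    """Flag timestamps (in seconds) where a grayscale frame's mean
    brightness jumps sharply relative to the video's typical
    frame-to-frame change."""
    means = np.array([float(np.mean(f)) for f in gray_frames])
    if means.size < 2:
        return []
    deltas = np.abs(np.diff(means))
    cutoff = deltas.mean() + k * deltas.std()
    # Delta i measures the change from frame i to frame i + 1,
    # so we flag the timestamp of frame i + 1.
    return [(i + 1) / fps for i, d in enumerate(deltas) if d > cutoff]

# Synthetic clip: a dark video with a single bright flash at frame 10.
frames = [np.zeros((2, 2))] * 10 + [np.full((2, 2), 255.0)] + [np.zeros((2, 2))] * 9
print(find_flash_timestamps(frames))  # onset and offset of the flash, around 0.33 s
```

Relative thresholding like this adapts to each video's baseline motion, so a slow pan does not trip the detector the way a hard cut to white does.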
For the front-end, we used Chrome’s extension developer tools to create a workable and intuitive UI. Over WebSocket, the extension and our analysis program could “talk” back and forth, producing a list of timestamps at which flashes, and therefore possible seizures, could occur.
Challenges we ran into
We faced many challenges throughout this project, starting with finding effective test videos and repeatedly checking whether a given script would identify potentially epileptic clips. Finding a set of parameters that reliably identified an on-screen strobe took many tries. Our approach was to grayscale the original video so the entire video could be viewed as brightness variations on a single channel, and then to define parameters measuring changes in that grayscale intensity. Once a suitable script was written, it had to send the timestamps of suspected epileptic moments to the front-end Google Chrome Extension. The issue was that the script often found brief hits subdivided into fractions of a second, so the back-end had to merge these into a single longer timestamp range rather than sending an assortment of subsecond timestamps to the front-end.
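The merging step described above can be sketched as follows; `merge_timestamps` and the `max_gap` threshold are our illustration for this writeup, not the exact code we shipped:

```python
def merge_timestamps(timestamps, max_gap=1.0):
    """Coalesce per-frame flash timestamps (in seconds) into
    (start, end) intervals, merging hits that fall within
    max_gap seconds of each other."""
    if not timestamps:
        return []
    timestamps = sorted(timestamps)
    intervals = [[timestamps[0], timestamps[0]]]
    for t in timestamps[1:]:
        if t - intervals[-1][1] <= max_gap:
            intervals[-1][1] = t  # extend the current interval
        else:
            intervals.append([t, t])  # start a new interval
    return [tuple(iv) for iv in intervals]

print(merge_timestamps([4.1, 4.2, 4.3, 9.0, 9.05, 20.0]))
# → [(4.1, 4.3), (9.0, 9.05), (20.0, 20.0)]
```

Sending a handful of intervals instead of dozens of subsecond hits keeps the WebSocket traffic small and makes the warnings readable on the front-end.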
Accomplishments that we're proud of
We succeeded in getting a working project to correctly analyze a video for strobing lights and return timestamps warning the user of potentially hazardous sections. While other attempts at video filtering use transcript data to inform their back-end systems, we analyzed the video frames directly, making our system sensitive to harsh light changes and inclusive of videos without transcripts. With this back-end, we were able to compare video frames for contrast and grab timestamps for front-end use. On the front-end, we developed a Google Chrome Extension that takes in the YouTube video and notifies the user of upcoming strobing lights before they appear. We achieved the basics of our goal and hope to help light-sensitive people gain more control over their video-watching.
What we learned
From this experience, our team learned a lot about the power of communication and teamwork in creating thoughtful products. We strove to collaborate efficiently and kindly, quickly got an idea together, and figured out GitHub Live Share to work on code simultaneously! As this was the first hackathon for many of our team members, we gained experience with many developer tools and common languages through a real-world project. On the technical side, we explored building a unique Google Chrome Extension as a way to interpret user interaction and deliver output messages. Furthermore, we learned to use WebSocket, making the most of its bidirectional nature to deliver multiple real-time outputs from the video analysis. Behind the scenes, we discovered how to analyze video frames and develop criteria for detecting strobing lights. Overall, we grew profoundly in skill and character from this experience.
What's next for FlashAlert
Our next steps for FlashAlert are to refine our video analysis to detect strobing confined to smaller regions of the frame and in color changes with lower overall contrast. On the front-end, we also hope to use our timestamp data not only to warn the user but also to automatically pause or alter the video to further protect them.
Built With
- fetch
- flask
- javascript
- opencv
- python
- vim
- websocket