Inspiration
We thought of this project because most of us had something in common. We had elderly family members, like grandparents, who had taken hard falls and suffered from them.
I (Arjun) kept having a thought pop into my head: what if nobody was there?
At first, I thought of cameras. But knowing my grandparents and many other people, elderly and non-elderly alike, they wouldn't want cameras in their home, even if the monitoring were fully automated.
But then we found a research paper on pinpointing people via WiFi, and that's how FallGuy was born. FallGuy is, quite literally, your fall guy: if you ever take a fall and are knocked unconscious, or can't reach emergency services, it can detect that.
The beauty of it is that WiFi is a huge, largely untapped resource. It exists practically everywhere, with nearly a hundred access points in this building alone.
What it does
Rather than WiFi (TAMU's WiFi doesn't let us obtain CSI data, and neither do our laptops and phones), we decided to measure RSSI (Received Signal Strength Indicator) over Bluetooth. Using signal-strength readings from multiple access points, we can detect when an individual falls. This is extremely valuable, especially in environments where people might not be around: offices late at night, houses, and even outdoors in some populated areas.
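To make the idea concrete, here is a toy sketch of what the raw signal looks like. The access point names and numbers are made up, and this naive variance check is only an illustration; our actual detector is the LSTM described below.

```python
# Toy illustration only: a fall shows up as a sudden, correlated RSSI
# disturbance across several access points at once. Our real detector is an
# LSTM; this naive variance check just shows the shape of the data.
import statistics

# One RSSI reading (dBm) per access point, sampled over time (made-up values).
samples = [
    {"ap_kitchen": -52, "ap_hall": -61, "ap_office": -70},  # person standing
    {"ap_kitchen": -53, "ap_hall": -60, "ap_office": -71},
    {"ap_kitchen": -67, "ap_hall": -74, "ap_office": -80},  # sudden drop: body blocks signal paths
    {"ap_kitchen": -66, "ap_hall": -73, "ap_office": -79},
]

def disturbance(window):
    """Average per-AP standard deviation over a window of samples."""
    aps = window[0].keys()
    return sum(statistics.pstdev(s[ap] for s in window) for ap in aps) / len(aps)

print(f"disturbance score: {disturbance(samples):.1f} dB")  # large => something moved fast
```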
How we built it
We built FallGuy using a multi-device Bluetooth Low Energy (BLE) mesh network that continuously monitors RSSI values between devices.
Each device simultaneously advertises and scans for other devices, creating a mesh network, while one designated device (the "queen" node) aggregates RSSI data from all other devices every 10 ms.
These values are analyzed to build a signal-strength matrix showing the relationship between every pair of devices, and all readings are logged to CSV files for post-processing and machine learning model training.
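The on-device code is a Swift app, but the aggregation the queen node performs is simple enough to sketch in Python. The device names, field names, and CSV layout below are illustrative assumptions, not the real schema:

```python
# Sketch of the queen node's aggregation step (illustrative; the real app is Swift).
# Every tick (~10 ms) each device reports the RSSI it sees for every peer; the
# queen assembles those readings into a device-by-device signal-strength matrix
# and appends a row to a CSV log.
import csv
import time

DEVICES = ["queen", "drone1", "drone2"]  # hypothetical device names

def update_matrix(matrix, reporter, readings):
    """readings: {peer_name: rssi_dBm} as reported by `reporter`."""
    for peer, rssi in readings.items():
        matrix[(reporter, peer)] = rssi
    return matrix

def log_row(writer, matrix):
    row = {"timestamp_ms": int(time.time() * 1000)}
    for src in DEVICES:
        for dst in DEVICES:
            if src != dst:
                row[f"{src}->{dst}"] = matrix.get((src, dst), "")
    writer.writerow(row)

with open("rssi_log.csv", "w", newline="") as f:
    fields = ["timestamp_ms"] + [f"{s}->{d}" for s in DEVICES for d in DEVICES if s != d]
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    matrix = {}
    # In the real system these readings arrive over BLE every ~10 ms.
    update_matrix(matrix, "drone1", {"queen": -58, "drone2": -72})
    update_matrix(matrix, "drone2", {"queen": -64, "drone1": -71})
    log_row(writer, matrix)
```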
The model (an LSTM in TensorFlow) achieves 93% accuracy in detecting falls. We expect this to improve substantially with more test cases and more time to label data.
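The model itself is a small LSTM. The layer sizes and window length below are placeholders, but the overall shape (a window of per-link RSSI readings in, fall/no-fall probability out) matches what we trained:

```python
# Minimal sketch of the fall-detection LSTM (layer sizes and window length are placeholders).
import tensorflow as tf

WINDOW = 200   # e.g. 200 samples at 10 ms = 2 seconds of RSSI history
N_LINKS = 6    # number of device-to-device RSSI links in the mesh

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(WINDOW, N_LINKS)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(fall in this window)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_windows, y_labels, epochs=..., validation_split=0.2)
```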
The data was labeled by lining up video footage of nearly 100 falls with the corresponding RSSI recordings.
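Labeling amounted to matching the timestamp of each fall seen on video to the nearest rows of the RSSI log. A rough sketch of that step (the column names and window size are assumptions):

```python
# Sketch of labeling: mark RSSI rows that land within a short window around each
# fall timestamp read off the video footage.
import pandas as pd

rssi = pd.read_csv("rssi_log.csv")            # assumed to have a timestamp_ms column
fall_times_ms = [120_430, 187_902, 240_115]   # fall moments read off the video (examples)

LABEL_WINDOW_MS = 1_500  # mark 1.5 s around each fall as positive
rssi["label"] = 0
for t in fall_times_ms:
    mask = (rssi["timestamp_ms"] >= t - LABEL_WINDOW_MS) & (rssi["timestamp_ms"] <= t + LABEL_WINDOW_MS)
    rssi.loc[mask, "label"] = 1

rssi.to_csv("rssi_labeled.csv", index=False)
```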
More features we added:
- 10ms sampling rate for high-resolution temporal data
- Automatic device discovery and connection management
- P2P data sharing, where each device sends its RSSI readings to the central collector
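The exact wire format lives in the Swift app, but each P2P report boils down to something like the following (field names and encoding here are illustrative assumptions):

```python
# Hypothetical shape of the report each device sends to the queen node every tick.
import json
import time

report = {
    "device": "drone1",
    "timestamp_ms": int(time.time() * 1000),
    "rssi": {"queen": -58, "drone2": -72},  # dBm observed for each peer this tick
}
payload = json.dumps(report).encode("utf-8")  # what would travel over the BLE link
```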
Challenges we ran into
- WiFi CSI Data Limitations: Our original plan was to use WiFi Channel State Information (CSI), which provides much richer signal data. It would allow us to get very precise voxel-level estimates of an individual's position, and possibly even create a visual of their location. However, we discovered that TAMU's enterprise WiFi network doesn't expose CSI data, and consumer devices (iPhones, MacBooks) don't provide API access to CSI.
This forced us to pivot to Bluetooth RSSI, but:
- RSSI only gives us signal strength. We weren't sure this approach would work until the very last minute, so we actually worked on a different project concurrently as a backup. Given the time limit, this proved a bad idea; however, we were still able to collect data and train a model.
- We weren't able to make the 3D visualization we initially imagined. Still, we can detect falls using a neural network that finds a correlation in the data.
- We didn't have a dataset - we had to make our own. We made a Swift app, loaded it onto all of our devices so they would act as access points, and fell repeatedly, dozens of times, until we had enough data.
Accomplishments that we're proud of
- We were able to turn raw RSSI into usable data. This was difficult because of the many restrictions on capturing data in the mobile runtime environment.
- We captured extensive data from a simulated mesh of 3 devices and properly tagged it for training.
- We trained an effective model with limited data.
- We completed the fall detector using RSSI instead of CSI.
What we learned
- We learned more about RSSI/CSI and how Bluetooth works.
- We learned how to train a model with limited data.
- We learned more about how to collect Bluetooth and other restricted data in a mobile environment.
- We learned how to triangulate motion via Bluetooth signals.
- We learned more about how we would integrate Gemini/VLMs into video processing.
What's next for FallGuy
- Switching to CSI would be a major change for the future. It would give us much richer data for the fall detector, but would require far greater access to a router network.
- We will fully integrate Gemini, offering greater accuracy when reading the data.
- We will also containerize and package our system, enabling remote viewers to monitor and review information about a user's condition.