Inspiration
Large university campuses are difficult environments to monitor in real time. While tools like blue-light emergency stations and campus security patrols exist, they are often limited by coverage and response time. Our team began asking whether emerging technologies such as drones and computer vision could help improve situational awareness in areas where traditional safety infrastructure may be sparse.
At the same time, advances in machine learning and autonomous systems have made it possible to analyze visual information in real time. We were inspired by the idea of combining aerial robotics with computer vision to explore how technology could provide an additional layer of awareness for communities.
This led to the creation of AEGIS.
What it does
AEGIS (Aerial Escort and Guardian Intelligence System) is a drone-assisted safety monitoring platform designed to improve situational awareness in large environments such as university campuses.
The system uses a drone equipped with a camera to capture live video feeds. These feeds are processed using computer vision and machine learning to detect human presence and analyze movement patterns. When unusual or potentially unsafe activity is detected, alerts can be relayed through a mobile interface and administrative dashboard.
The goal of AEGIS is not to replace existing safety systems, but to supplement them by providing an additional mobile vantage point that can help identify situations earlier.
How we built it
The AEGIS prototype combines drone hardware, computer vision models, and a lightweight software stack.
The drone platform used was the DJI Tello, which allowed us to access camera feeds and telemetry through the Tello SDK. Using Python and the djitellopy library, we established a communication layer that enabled flight control and video streaming.
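A minimal sketch of that communication layer, assuming the stock djitellopy API (the specific command values and the `clamp_rc` helper are illustrative, not our production code):

```python
def clamp_rc(value: int) -> int:
    """Clamp a velocity command to the -100..100 range the Tello accepts."""
    return max(-100, min(100, int(value)))

def main() -> None:
    # imported here so the pure helper above can be exercised without hardware
    from djitellopy import Tello

    tello = Tello()
    tello.connect()                      # open the UDP command link
    print("battery %:", tello.get_battery())

    tello.streamon()                     # start the video feed
    frame_reader = tello.get_frame_read()

    tello.takeoff()
    # send_rc_control(left_right, forward_back, up_down, yaw), each -100..100
    tello.send_rc_control(0, clamp_rc(30), 0, 0)
    frame = frame_reader.frame           # latest frame (numpy BGR array)
    # ... hand the frame off to the perception pipeline here ...
    tello.land()
    tello.streamoff()

if __name__ == "__main__":
    main()
```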
For perception and analysis, we integrated several computer vision frameworks, including:
• OpenCV for real-time image processing
• MediaPipe for human pose estimation
• TensorFlow for machine learning support
The processing pipeline analyzes incoming frames from the drone camera to detect human figures and track movement patterns.
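The tracking half of that pipeline can be sketched as a simple greedy centroid tracker: each new detection is matched to the nearest existing track, and detections too far from any track start a new one. This is a hypothetical stand-alone helper illustrating the idea, not our exact implementation:

```python
import math

class CentroidTracker:
    """Associate per-frame detections across frames by nearest centroid."""

    def __init__(self, max_distance: float = 75.0):
        self.max_distance = max_distance  # pixels; beyond this, start a new track
        self.next_id = 0
        self.tracks: dict[int, tuple[float, float]] = {}

    def update(self, centroids):
        """Match each (x, y) centroid to a track ID; return {id: (x, y)}."""
        assigned = {}
        unmatched = list(self.tracks.items())
        for cx, cy in centroids:
            best_id, best_dist = None, self.max_distance
            for tid, (tx, ty) in unmatched:
                d = math.hypot(cx - tx, cy - ty)
                if d < best_dist:
                    best_id, best_dist = tid, d
            if best_id is None:
                best_id = self.next_id      # no nearby track: new person
                self.next_id += 1
            else:
                unmatched = [(t, p) for t, p in unmatched if t != best_id]
            assigned[best_id] = (cx, cy)
        self.tracks = assigned              # tracks with no match are dropped
        return assigned
```

Feeding the tracker the bounding-box centroids from the detector yields per-person trajectories, which is what the movement-pattern analysis operates on.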
To visualize and manage system data, we built a cross-platform desktop interface using:
• Electron
• Vite
• Vanilla JavaScript, HTML, and CSS
• Mapbox GL JS for spatial visualization
A Flask backend server acts as a communication bridge between the drone telemetry, computer vision pipeline, and user interface.
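A stripped-down sketch of such a bridge server, with in-memory stores standing in for the real telemetry and alert queues (route names here are illustrative):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

latest_telemetry: dict = {}   # most recent drone state
alerts: list = []             # alerts raised by the CV pipeline

@app.route("/telemetry", methods=["POST"])
def post_telemetry():
    # the drone control loop pushes battery, velocity, height, etc.
    latest_telemetry.update(request.get_json(force=True))
    return jsonify(ok=True)

@app.route("/telemetry", methods=["GET"])
def get_telemetry():
    # the Electron dashboard polls this endpoint
    return jsonify(latest_telemetry)

@app.route("/alerts", methods=["POST"])
def post_alert():
    alerts.append(request.get_json(force=True))
    return jsonify(count=len(alerts))

@app.route("/alerts", methods=["GET"])
def get_alerts():
    return jsonify(alerts)

if __name__ == "__main__":
    app.run(port=5000)
```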
This architecture allowed us to combine aerial robotics, machine learning, and real-time monitoring into a single prototype system.
Challenges we ran into
Building a real-time drone monitoring system presented several technical challenges.
One major challenge was working with the limited telemetry capabilities of the DJI Tello drone. The drone does not provide absolute X/Y positional data, so we had to estimate position using velocity integration over time.
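The estimation itself is plain dead reckoning: accumulate reported velocity multiplied by elapsed time. A minimal sketch (drift accumulates, so this only gives a rough local estimate):

```python
from dataclasses import dataclass

@dataclass
class DeadReckoner:
    """Estimate X/Y position by integrating reported velocities over time."""
    x: float = 0.0
    y: float = 0.0

    def step(self, vx: float, vy: float, dt: float) -> tuple[float, float]:
        # simple Euler integration: position += velocity * elapsed time
        self.x += vx * dt
        self.y += vy * dt
        return self.x, self.y
```

Called once per telemetry update with the drone's reported velocities and the time since the last update, this yields a usable relative position for short flights.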
Another challenge involved real-time video processing performance. Running computer vision models while maintaining smooth drone control required careful management of asynchronous processes and multi-threaded communication.
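One pattern that helps here is a drop-stale-frames queue between the video thread and the (slower) vision thread, so detection always runs on the newest frame instead of falling behind. A small sketch with integers standing in for frames:

```python
import queue
import threading
import time

def frame_producer(q: queue.Queue, n_frames: int) -> None:
    """Simulate the video thread: push frames, discarding stale ones."""
    for i in range(n_frames):          # i stands in for a video frame
        try:
            q.put_nowait(i)
        except queue.Full:
            try:
                q.get_nowait()         # drop the unconsumed stale frame...
            except queue.Empty:
                pass
            q.put_nowait(i)            # ...so the consumer sees recent data
        time.sleep(0.001)

def consume_latest(q: queue.Queue, timeout: float = 1.0):
    """The vision thread blocks here for the freshest available frame."""
    return q.get(timeout=timeout)
```

With a queue of `maxsize=1`, the consumer never processes a backlog; it always works on (roughly) the current camera view, which keeps drone control responsive.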
Computer vision also introduces inherent limitations. While human detection models are effective at identifying people, determining intent or distinguishing between harmless and dangerous behavior remains a difficult problem.
Additionally, integrating multiple components like drone control, perception models, and the user interface required careful coordination across different programming environments.
Accomplishments that we're proud of
We are proud that we integrated a working drone system with a real-time computer vision pipeline within the hackathon's limited time frame.
Our team built a prototype that demonstrates how aerial robotics, machine learning, and modern web technologies can be combined into a unified safety monitoring platform.
We also placed a strong emphasis on ethical design considerations, ensuring that our system focuses on situational awareness rather than identity tracking.
Finally, we are proud of the interdisciplinary nature of the project, combining elements of robotics, software engineering, and artificial intelligence into a single prototype.
What we learned
Through this project we gained hands-on experience working with several complex systems simultaneously, including drone control APIs, real-time video processing, and machine learning frameworks.
We learned how challenging it can be to coordinate hardware systems with software pipelines, especially when working with real-time data streams.
We also gained a deeper appreciation for the ethical considerations surrounding surveillance technologies and the importance of designing systems that prioritize privacy and responsible use.
Perhaps most importantly, we learned how quickly a prototype can evolve when a team collaborates across multiple disciplines.
What's next for Ralphly
While AEGIS is currently a prototype, there are many directions for future development.
Future improvements could include:
• more advanced behavior analysis models
• improved drone navigation and autonomous flight capabilities
• integration with existing campus safety infrastructure
• development of a full mobile safety application
• deployment of multiple coordinated drones for broader coverage
We also plan to explore stronger privacy protections and governance frameworks to ensure systems like AEGIS can be deployed responsibly.
Our long-term vision for Ralphly is to continue exploring how emerging technologies like robotics and machine learning can be used to support safer and more aware communities.