Inspiration

The project was inspired by the need for automated solutions to improve accessibility in public parking areas. Vehicles displaying the International Symbol of Access (ISA) need clear identification to ensure fair use of accessible parking spaces. This project automates the detection of these vehicles in real time, benefiting city planners, enforcement agencies, and people with disabilities by helping prevent misuse of accessible parking spaces.

What it does

The system uses computer vision and deep learning to detect vehicles displaying an ISA placard. It analyzes video streams in real time and draws bounding boxes around vehicles carrying the ISA symbol. This technology could be integrated into surveillance systems or parking-enforcement applications to improve accessibility monitoring.
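As a rough illustration, a detection loop of this kind might look like the sketch below. It assumes a fine-tuned weights file (here given the hypothetical name "isa_vehicle_yolo11.pt") with a single ISA-vehicle class; the actual weights, class names, and video source used in the project may differ.

```python
# Minimal sketch of a real-time ISA-vehicle detection loop (hypothetical names).
import cv2
from ultralytics import YOLO

model = YOLO("isa_vehicle_yolo11.pt")  # hypothetical fine-tuned weights file

cap = cv2.VideoCapture(0)  # webcam; an RTSP URL could be used for a CCTV feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Run inference on the current frame; results[0] holds the detections.
    results = model(frame, verbose=False)

    # Draw a bounding box and confidence score for every detection.
    for box in results[0].boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        conf = float(box.conf[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"ISA vehicle {conf:.2f}", (x1, y1 - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow("ISA vehicle detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```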

How we built it

We built the system using the YOLOv11 (You Only Look Once) model, fine-tuning it on a custom dataset of vehicle images with ISA placards. We used the Ultralytics YOLO library for model training and OpenCV for real-time video stream processing, and optimized the model for fast inference and high accuracy. The system was deployed on an AWS EC2 instance whose infrastructure is provisioned and managed with Terraform, automating the setup and making the solution reproducible and easier to scale.
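A hedged sketch of what the fine-tuning step could look like with the Ultralytics API is shown below. The dataset config name ("isa_dataset.yaml"), the nano checkpoint choice, and the hyperparameters are illustrative assumptions, not the project's exact settings.

```python
# Sketch of fine-tuning a YOLO11 checkpoint on a custom ISA-vehicle dataset.
from ultralytics import YOLO

# Start from a pretrained checkpoint; the nano variant is assumed here for speed.
model = YOLO("yolo11n.pt")

results = model.train(
    data="isa_dataset.yaml",  # hypothetical dataset config (image paths + class names)
    epochs=100,
    imgsz=640,
    batch=16,
)

# Evaluate on the validation split and optionally export for optimized inference.
metrics = model.val()
model.export(format="onnx")
```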

Challenges we ran into

One major challenge was ensuring that the model detects the entire vehicle rather than just the ISA placard. Balancing accuracy with speed for real-time video processing was also difficult, as we needed to keep inference under 33 ms per frame (roughly 30 frames per second). Collecting and annotating a sufficiently large and diverse dataset was another significant challenge.
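One simple way to check a model variant against that 33 ms budget is to time inference frame by frame, as in the sketch below. The weights file and test clip names are hypothetical placeholders.

```python
# Rough per-frame latency check against the ~33 ms (about 30 FPS) budget.
import time
import cv2
from ultralytics import YOLO

model = YOLO("isa_vehicle_yolo11.pt")          # hypothetical fine-tuned weights
cap = cv2.VideoCapture("parking_lot_test.mp4")  # hypothetical test clip

latencies_ms = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    start = time.perf_counter()
    model(frame, verbose=False)
    latencies_ms.append((time.perf_counter() - start) * 1000.0)

cap.release()
if latencies_ms:
    avg = sum(latencies_ms) / len(latencies_ms)
    status = "within" if avg <= 33 else "over"
    print(f"average inference latency: {avg:.1f} ms ({status} the 33 ms budget)")
```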

Accomplishments that we're proud of

We successfully trained a YOLOv11 model capable of detecting vehicles with ISA placards in real time, and deployed it on an EC2 instance with the infrastructure managed by Terraform. We also optimized the system to process video streams without significant frame drops, achieving a good balance between speed and accuracy. Finally, we designed a solution that is scalable and adaptable to real-world parking enforcement applications.

What we learned

Throughout the project, we gained valuable insights into real-time computer vision systems, particularly in the realm of object detection. We learned how to fine-tune YOLO models for specific applications, how to optimize for both performance and accuracy, and the importance of a robust, diverse dataset for training.

What's next for Techie Trove

The next steps include expanding the dataset to cover more diverse vehicle types and lighting conditions to improve model generalization. We also plan to integrate the system with real-world parking enforcement systems and explore additional accessibility features, such as flagging parking violations or mapping accessible parking spaces across cities.

Built With

Python, Ultralytics YOLOv11, OpenCV, AWS EC2, Terraform