Inspiration
Three of our team members cannot operate an automobile.
Driving is a crucial part of daily life, and bad habits, ignoring road rules, and speeding cause many of the car accidents that occur. The G2 and G road tests exist to ensure the road is shared only among drivers proven competent, improving safety for everyone. As a safeguard of driving safety, these tests matter more and more; yet more than 40% of the growing test-taking population (~33,000/month) fail the G2 road test. This points to two major problems: 1) a lack of driving skills and 2) an inability to target what the test actually evaluates. Our focus for this hackathon is therefore to make driver training more accessible, easy, and intuitive.
What it does
We deliver a smart driving assistant that picks up on traits often ignored by human instructors to improve driving quality, and strictly judges the trainee's driving against the official testing criteria. The system has two integral parts: an image recognition feed from a front-facing camera that recognizes road signs, and a macro-monitor that uses HyperTrack to track the car's movement and parameters. The image recognition system checks that the trainee obeys all street rules and signs; when they don't (for example, not waiting long enough at a stop sign or running a yellow light), the system tallies the infractions and reports them to the trainee after each practice session. The macro-monitor, in turn, uses HyperTrack to track the car's position, pace, and parameters such as speed and acceleration, detecting bad habits like speeding or jerky driving. The system also gives the trainee audio cues to support them with information as they drive through different situations.
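To make the macro-monitor idea concrete, here is a minimal sketch of how speeding and jerky driving could be flagged from a stream of speed readings. The thresholds, helper name, and sample data are illustrative assumptions, not our production values.

```python
# Minimal sketch of the macro-monitor's bad-habit checks.
# Speeds are in km/h, sampled at a fixed interval; thresholds are
# illustrative assumptions, not tuned production values.

SPEED_LIMIT_KMH = 50   # assumed posted limit for the road segment
JERK_THRESHOLD = 8     # km/h change between samples treated as "jerky"

def detect_bad_habits(speed_samples, limit_kmh=SPEED_LIMIT_KMH):
    """Tally speeding and jerky-driving events from speed readings."""
    feedback = []
    for i, speed in enumerate(speed_samples):
        if speed > limit_kmh:
            feedback.append((i, f"speeding: {speed} km/h in a {limit_kmh} zone"))
        if i > 0 and abs(speed - speed_samples[i - 1]) > JERK_THRESHOLD:
            feedback.append((i, "jerky driving: sudden speed change"))
    return feedback

# A short drive with one speeding stretch and one hard brake.
print(detect_bad_habits([42, 45, 49, 56, 58, 44, 43]))
```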
How we built it
The frontend is a web app built with React Native, which calls the Google Maps API and the HyperTrack API to handle everything related to speed, sudden braking, and sharp turns. The front-facing camera watches the road ahead and captures images that are sent to a machine learning model that detects traffic signs and traffic lights. Using that information, the frontend handles the logic for all rules related to signs and lights.
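As a rough sketch of the camera-to-model hop, a Django view receiving a frame from the app might look like the following. The endpoint name, routing, and `classify_frame` stub are hypothetical; our real plumbing differs in detail.

```python
# views.py -- hypothetical sketch of the endpoint the React Native app
# could post captured frames to; names and routing are illustrative.
import base64
import json

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

def classify_frame(frame_bytes):
    """Placeholder for the traffic-sign model (Keras/Clarifai in our stack)."""
    return [{"label": "stop_sign", "confidence": 0.97}]  # dummy output

@csrf_exempt
def detect_signs(request):
    """Accept a base64-encoded camera frame and return detected signs."""
    payload = json.loads(request.body)
    frame_bytes = base64.b64decode(payload["frame"])
    detections = classify_frame(frame_bytes)
    return JsonResponse({"detections": detections})
```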
Challenges we ran into
We encountered a number of challenges while building this hack. The biggest was training a machine learning image recognition model that satisfied our needs: many pre-existing models exist, but they either weren't accurate enough at recognizing signs from a distance or were written in a language incompatible with our choice of front-end stack. We resolved this by adopting Clarifai, a lesser-known tool with a steep learning curve; once we made it work, it drastically reduced our integration workload and difficulty.
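For reference, querying a custom Clarifai model boils down to a single REST request. This is a generic sketch against Clarifai's documented v2 `outputs` endpoint; the API key, model ID, and image path are placeholders rather than our actual values.

```python
# Generic sketch of querying a custom Clarifai model via its v2 REST API.
# The key, model ID, and image path are placeholders.
import base64
import requests

API_KEY = "YOUR_CLARIFAI_API_KEY"  # placeholder
MODEL_ID = "traffic-sign-model"    # placeholder custom model ID

def predict_signs(image_path):
    """Return (concept, confidence) pairs for one captured frame."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    resp = requests.post(
        f"https://api.clarifai.com/v2/models/{MODEL_ID}/outputs",
        headers={"Authorization": f"Key {API_KEY}"},
        json={"inputs": [{"data": {"image": {"base64": image_b64}}}]},
    )
    resp.raise_for_status()
    concepts = resp.json()["outputs"][0]["data"].get("concepts", [])
    return [(c["name"], c["value"]) for c in concepts]
```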
Accomplishments that we're proud of
Many of our members are attending Hack the North for the first time. We are really proud of finishing this project with most of the features we planned for.
What we learned
We learned how the computer vision object detection problem is solved mathematically. We also learned the pros and cons of different front-end and back-end frameworks and the trade-offs to make between them.
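One small piece of that math is intersection-over-union (IoU), the overlap score detectors use to decide whether a predicted bounding box matches a ground-truth box. The snippet below is a generic textbook illustration, not code from our project.

```python
# Intersection-over-Union (IoU) between two axis-aligned boxes,
# given as (x_min, y_min, x_max, y_max). Generic illustration only.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to 0 if the boxes are disjoint.
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping on half their area -> IoU = 1/3.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```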
What's next for DrivePal
There are many more road rules than the ones our system currently covers, so it is essential to expand the image detector to recognize more signs, encode how to act around them, and provide effective training to students. Our system also cannot yet detect what the driver is doing or what is behind the car, so a back-facing camera would bring further improvements. We also look forward to monetizing DrivePal in the future.
Built With
- clarifai
- django
- hypertrack
- keras
- python
- react-native