A major roadblock to fully autonomous driving becoming a mainstream technology is its reliability in nighttime and low-light scenarios. Current leaders of the autonomous movement have gravitated toward camera- and computer-vision-based models to perform the bulk of their autonomous control. However, these systems are limited by the hardware's ability to deliver detailed image data in environments with low light and motion blur. To that end, we aim to develop a suite of computer vision models using event-based cameras (in tandem with conventional cameras), which perform significantly better in fast-paced environments thanks to their increased light sensitivity and variable per-pixel refresh rate. Our goal is to adapt current mapping and planning algorithms to event-based cameras on an F1Tenth RC car using an asynchronous paradigm.
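To illustrate the asynchronous paradigm, here is a minimal sketch (not our implementation; the event format and the `accumulate_events` helper are assumptions for illustration) of how an event camera's output differs from conventional frames: instead of full images at a fixed rate, it emits a sparse stream of per-pixel brightness changes, which downstream vision code can accumulate over a short time window into a frame-like representation.

```python
import numpy as np

def accumulate_events(events, width, height, t_start, t_end):
    """Sum event polarities per pixel over [t_start, t_end) into a 2D frame.

    Each event is (timestamp_us, x, y, polarity), where polarity is
    +1 (pixel got brighter) or -1 (pixel got darker).
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        if t_start <= t < t_end:
            frame[y, x] += p
    return frame

# Example: three events at two pixels within a 10 ms window.
events = [
    (1_000, 2, 3, +1),   # pixel (2, 3) brightened at t = 1 ms
    (4_000, 2, 3, +1),   # the same pixel brightened again
    (7_000, 5, 1, -1),   # pixel (5, 1) darkened
]
frame = accumulate_events(events, width=8, height=8, t_start=0, t_end=10_000)
print(frame[3, 2], frame[1, 5])  # 2 -1
```

Because events arrive continuously rather than at a fixed frame rate, the accumulation window can be shortened during fast motion to reduce blur, which is the property that motivates the asynchronous approach described above.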
Built With
- c++
- f1tenth
- machine-learning
- python
- slam