Inspiration
Our teammate was tired of counting his points when playing darts and wanted a system to break the dartboard into regions to determine where the dart landed.
What it does
Segment an arbitrary image into clickable, interactive regions
Why this matters
Our application lets users explicitly assign values (such as dartboard point values) to distinct, intuitive areas of an image.
How we built it
We used OpenCV to divide an image into regions and a browser-based UI to select point values for each region.
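A minimal sketch of the kind of OpenCV pipeline this involves (the file name, thresholds, and area cutoff below are illustrative assumptions, not our exact parameters):

```python
import cv2

# Illustrative pipeline: load a board photo and extract candidate regions as contours.
image = cv2.imread("dartboard.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)           # edge detection
# OpenCV 4 return signature: (contours, hierarchy)
contours, _ = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Keep only reasonably large contours as candidate regions.
regions = [c for c in contours if cv2.contourArea(c) > 100]
print(f"found {len(regions)} candidate regions")
```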
Challenges we ran into
- Smoothing out noise and creating distinct regions from sometimes overlapping contours (see the sketch after this list)
- Installing dependencies many would take for granted (such as OpenCV and Matplotlib) on a Raspberry Pi
- Investing significant time in integrating TensorFlow's Object Detection API only to realize it added unnecessary complexity and computation time
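One way to tackle the noise and overlapping-contour problem is to close small gaps in the edge map before extracting contours, so each board region comes out as a single distinct blob. The kernel size and iteration count below are assumptions for illustration, not our exact fix:

```python
import cv2
import numpy as np

# Sketch: morphological closing merges broken or near-overlapping edges
# into solid region boundaries before contour extraction.
gray = cv2.imread("dartboard.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel, iterations=2)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```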
Accomplishments we're proud of
- Creating clickable regions from an arbitrary image
- Delivering a small-footprint user interface by using minimal JavaScript and PHP
- Converting contour data generated by OpenCV to interactive SVGs for the web UI (sketched after this list)
- Running the application entirely on the Raspberry Pi (image capture and processing, HTTP web interface, etc.)
- Delivering results by using simpler technologies as appropriate (e.g., not using machine learning when edge detection suffices)
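The contour-to-SVG step can be sketched roughly as follows; the attribute names and markup are assumptions about what a click-to-assign UI needs, not our exact output:

```python
import cv2

def contours_to_svg(contours, width, height):
    """Emit one clickable <polygon> per contour; the web UI can attach a
    click handler via the data-region attribute. Markup is illustrative."""
    polygons = []
    for i, contour in enumerate(contours):
        # Approximate each contour with fewer vertices to keep the SVG small.
        approx = cv2.approxPolyDP(contour, 2.0, True)
        points = " ".join(f"{int(x)},{int(y)}" for x, y in approx.reshape(-1, 2))
        polygons.append(
            f'<polygon data-region="{i}" points="{points}" '
            f'fill="transparent" stroke="black"/>'
        )
    body = "\n  ".join(polygons)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'viewBox="0 0 {width} {height}">\n  {body}\n</svg>')
```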
What we learned
- Edge detection
- HTTP serving
- Dynamic SVG generation
What's next for Rift
- Apply segmentation to imagery beyond dartboards:
  - aerial photographs
  - housing floor plans
  - retail clothing racks
  - company and personal storage
  - biological imagery
- Optical character recognition to derive textual data (like dartboard point values) for the regions
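If we pursue the OCR idea, one possible approach is to crop each detected region and run Tesseract on it; the use of pytesseract and the config string here are assumptions about a future direction, not something we built:

```python
import cv2
import pytesseract

def read_region_label(image, contour):
    """Crop one detected region's bounding box and OCR any text inside it."""
    x, y, w, h = cv2.boundingRect(contour)
    crop = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    # Restrict Tesseract to digits, since dartboard labels are point values.
    text = pytesseract.image_to_string(
        crop, config="--psm 7 -c tessedit_char_whitelist=0123456789"
    )
    return text.strip()
```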