Inspiration
Accessibility in the real world is painfully inconsistent. Ramps, elevators, doors, signage: you don’t know what you’ll face until you’re already there. People with disabilities often rely on crowdsourced reviews or outdated data, and even then most places simply aren’t documented.
We wanted to build something that helps people before they’re stuck outside a building wondering if they can even get in. The idea was simple: What if anyone could quickly capture an accessibility feature and have AI verify it on the spot? That spark became AccessibilityPlus.
What it does
AccessibilityPlus lets anyone submit a verified accessibility report using just a phone. Users take a photo, allow the site to grab their current coordinates, and upload a short report. Our AI model analyzes the image, identifies whether the accessibility feature is present (like a ramp, handicap parking, or elevator access), and scores its confidence.
All reports appear on a live map so people can immediately see real, user-generated accessibility data around them.
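A map layer like this typically consumes GeoJSON. As a sketch of how stored reports could be turned into markers (field names `lat`, `lon`, `feature`, and `confidence` are assumptions, not the actual schema):

```python
# Hypothetical sketch: convert stored reports into a GeoJSON
# FeatureCollection that a Leaflet front end can render as markers.

def reports_to_geojson(reports):
    """Build a GeoJSON FeatureCollection from a list of report dicts."""
    features = []
    for r in reports:
        features.append({
            "type": "Feature",
            "geometry": {
                "type": "Point",
                # GeoJSON uses [longitude, latitude] order
                "coordinates": [r["lon"], r["lat"]],
            },
            "properties": {
                "feature": r["feature"],
                "confidence": r["confidence"],
            },
        })
    return {"type": "FeatureCollection", "features": features}
```

On the front end, a payload like this can be dropped straight into Leaflet's `L.geoJSON(...)` to plot every report.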
How we built it
We built a full-stack pipeline in under 48 hours:
Frontend: A fast, clean mobile-first interface built with HTML/CSS/JS. It captures geolocation, handles file uploads, and displays status updates in real time.
Backend: A Python + FastAPI service running on AWS Elastic Beanstalk to handle uploads and process reports.
AI model: Computer vision using OpenAI’s image capabilities to detect accessibility features and return confidence scores.
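OpenAI's chat-completions API accepts images as base64 data URLs alongside a text prompt. A sketch of packaging one request (the prompt wording and expected JSON schema are assumptions):

```python
import base64

# Sketch: bundle an image plus a verification prompt into the
# message format OpenAI's vision-capable chat models accept.

def build_vision_messages(image_bytes: bytes, feature: str) -> list:
    """Build a chat message asking the model to verify one feature."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    prompt = (
        f"Does this photo show a {feature}? "
        'Reply as JSON: {"present": true/false, "confidence": 0-1}.'
    )
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }]
```

The resulting list goes to `client.chat.completions.create(...)` as the `messages` argument.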
Storage: Reports and uploaded images are saved in S3 buckets, with metadata stored for retrieval.
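One practical detail in a setup like this is the object-key scheme. A sketch of deriving unique, date-partitioned S3 keys (the layout and bucket name are assumptions, not the team's actual design):

```python
import hashlib
import time

# Sketch: derive a deterministic, collision-resistant S3 key so
# uploads are unique and easy to list by date.

def report_key(image_bytes: bytes, ts: float) -> str:
    """Build an S3 object key like reports/2024/11/16/<hash>.jpg."""
    digest = hashlib.sha256(image_bytes).hexdigest()[:16]
    day = time.strftime("%Y/%m/%d", time.gmtime(ts))
    return f"reports/{day}/{digest}.jpg"

# With boto3 the upload itself is a single call, e.g.:
#   s3 = boto3.client("s3")
#   s3.put_object(Bucket="accessibilityplus-reports",
#                 Key=report_key(data, time.time()),
#                 Body=data,
#                 Metadata={"lat": "40.0", "lon": "-75.0"})
```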
Deployment: CloudFront + Route53 + SSL for a fully custom domain and global CDN. Everything works together so a user can take a photo → send it → get an AI-verified result instantly.
Challenges we ran into
Geolocation reliability: Browsers can be extremely picky with HTTPS, permissions, and timing. Getting stable behavior took a lot of iteration.
File upload timeouts: Our first deployment hit 504 errors because the load balancer timeout wasn’t aligned with the model’s processing time.
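On Elastic Beanstalk, the load-balancer idle timeout can be raised via an `.ebextensions` config so it exceeds the model's worst-case latency. A sketch (the value is illustrative, and the exact namespace depends on the load balancer type):

```yaml
# .ebextensions/timeouts.config (sketch; 120s is an illustrative value)
option_settings:
  # Raise the ALB idle timeout above the vision model's slowest response
  aws:elbv2:loadbalancer:
    IdleTimeout: 120
```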
DNS + CloudFront: Routing a brand-new domain through Route53, CloudFront, and an Elastic Beanstalk load balancer under hackathon pressure was… an adventure.
AI response parsing: The model sometimes returned text instead of structured JSON, so we had to build fallback handlers.
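A fallback handler for this usually tries strict JSON first, then digs the first `{...}` block out of the prose. A sketch, assuming a `present`/`confidence` schema:

```python
import json
import re

# Fallback parser sketch for when the model wraps its JSON answer
# in prose or markdown fences. The expected fields ("present",
# "confidence") are assumptions about the schema.

def parse_model_reply(text: str) -> dict:
    """Extract the first JSON object from a model reply, else a safe default."""
    candidates = [text]
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        candidates.append(match.group(0))
    for candidate in candidates:
        try:
            data = json.loads(candidate)
            if isinstance(data, dict):
                return data
        except json.JSONDecodeError:
            continue
    # Nothing parseable: fail closed with zero confidence.
    return {"present": False, "confidence": 0.0}
```

Failing closed matters here: an unparseable reply should never show up on the map as a verified feature.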
Accomplishments that we're proud of
We shipped an entire end-to-end AI-powered accessibility tool in basically a day.
Custom domain, HTTPS, CDN, and cloud deployment all working in production.
Real-time accessibility detection with confidence scoring.
A working live map driven by actual user reports.
The whole experience feels simple and friendly, even though the backend is juggling a lot.
What we learned
AI is insanely powerful when paired with real-world context like GPS and images.
Deployment always takes longer than writing the actual code.
Designing for accessibility means thinking about reliability, clarity, and speed, not just features.
Hackathons reward doing one thing really well: in our case, capturing + verifying accessibility with minimal friction.
What's next for 3OIX - Superpower - AccessibilityPlus
Support more accessibility categories (bathroom access, braille signs, curb cuts).
Build a public API so city planners, nonprofits, and apps can use the data.
Add user accounts and badges to encourage community contributions.
Push notifications when new accessibility reports appear near you.
Offline “store now, upload later” mode for places without service.
Built With
- amazon-web-services
- api
- beanstalk
- browser
- certificate
- cloudfront
- css3
- elastic
- fastapi
- fetch
- gps
- html5
- leaflet.js
- manager
- nginx
- openai
- python
- route53
- s3
- vision


