Inspiration:
The inspiration behind our project was a simple but powerful idea: inclusivity should be the standard, not the exception. In everyday life, accessing buildings and spaces can be a daunting challenge for people with disabilities. Whether it's a lack of accessible ramps, difficulty locating elevators, or the absence of automated doors, we noticed that basic accessibility information is often missing or hard to find. We wanted to make this process easier and empower those who face these challenges daily.
We were also inspired by how technology can create solutions for everyone. The idea of building an app that supports accessibility was not only exciting but meaningful—it became our way to contribute to a more inclusive world. This drive for inclusivity, combined with our fascination with technology, became the foundation of our project.
What it does:
EaseAccess is all about making the world more accessible for people with disabilities. Here are the main features:
Building Accessibility Information: Users can input the name of a building to instantly get detailed information on accessibility features—like the number of elevators, ramps, and automated doors available. This makes it easier for users to plan their visits and navigate buildings with confidence.
University Navigation for Accessibility: Users can also enter the name of a university, and the app will provide them with the most accessible building on that campus. This feature helps students, staff, and visitors with disabilities quickly find the best spaces for their needs.
Color Detection for Color-Blind Users: Our app has a dedicated feature for color-blind individuals. By uploading an image, the app detects and displays the dominant color, helping users better interpret visual content and understand color differences.
How we built it:
Planning and Research: We started by brainstorming the key features: building accessibility information and color detection for color-blind users. We researched what data users with disabilities need and identified the primary accessibility indicators (ramps, elevators, and automated doors). We then planned the tech stack, choosing Flask for the backend and HTML/CSS/JavaScript for the front-end interface.
Data Integration: We built a mock knowledge base of building accessibility data for testing. For the color detection feature, we used the ColorThief library to extract dominant colors from uploaded images.
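In the app, ColorThief handles the image decoding and palette quantization for us, but the core idea (count coarsely bucketed colors and pick the most common one) can be sketched in plain Python over raw RGB tuples. The `quantize` step below is an illustrative assumption, not ColorThief's actual algorithm:

```python
from collections import Counter

def quantize(rgb, step=32):
    """Snap each channel to a coarse bucket so near-identical shades
    count as the same color."""
    return tuple((c // step) * step for c in rgb)

def dominant_color(pixels, step=32):
    """Return the most common quantized color among the pixels."""
    counts = Counter(quantize(p, step) for p in pixels)
    return counts.most_common(1)[0][0]
```

With two near-red pixels and one blue pixel, both reds fall into the same bucket, so the red bucket wins.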
Gemini API: We used the Gemini API to generate accessibility details when the specific building or university was not in our database. This allowed us to provide consistent information even when data was limited.
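The lookup-then-generate fallback can be sketched as below; `generate` stands in for the Gemini API call, and the function name and return shape are our illustrative assumptions:

```python
def accessibility_info(name, knowledge_base, generate):
    """Return accessibility details from the knowledge base when we have
    them, otherwise fall back to a generated summary."""
    entry = knowledge_base.get(name.strip().lower())
    if entry is not None:
        return {"source": "database", "name": name, **entry}
    # Fall back to the generator (the Gemini API in the real app).
    prompt = f"List the accessibility features of {name}."
    return {"source": "generated", "name": name, "summary": generate(prompt)}
```

Tagging each response with its `source` lets the frontend tell users whether the details came from curated data or were AI-generated.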
Backend: We developed a Flask-based backend to handle data processing and communicate with the front-end. We also implemented routes to manage building queries and image uploads.
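A minimal sketch of the building-query route is below; the route path, field names, and in-memory data are illustrative assumptions, not our exact code:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the real knowledge base.
BUILDINGS = {
    "science hall": {"elevators": 2, "ramps": 3, "automated_doors": 1},
}

@app.route("/building")
def building_info():
    """Look up a building by name and return its accessibility details."""
    name = request.args.get("name", "").strip().lower()
    info = BUILDINGS.get(name)
    if info is None:
        return jsonify({"error": "building not found"}), 404
    return jsonify({"name": name, **info})
```

Returning JSON (with a 404 for unknown buildings) keeps the frontend's fetch logic simple.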
Frontend: We created a user-friendly interface where users could easily search for building accessibility details or upload an image for color detection. The frontend communicated with the backend via JSON API requests.
Testing and Deployment: We tested the app using various building names and images to ensure the color detection feature worked accurately. We also validated that the accessibility data was consistent and reliable, and adjusted the interface to make it intuitive and easy to navigate for people with different needs.
Challenges we ran into:
1. Data Availability:
One major challenge was the lack of detailed accessibility data for many buildings and universities. To overcome this, we used a mock knowledge base for testing and implemented the Gemini API to fill in gaps when data was unavailable. This allowed us to provide meaningful responses even with limited data.
2. Handling API Responses:
Integrating with the Gemini API presented challenges, especially when handling unexpected or incomplete data from API responses. We had to build robust error handling to ensure that users always received clear feedback, even if the data was missing.
3. Color Blindness Feature:
Developing the color detection feature was technically challenging. Understanding how to represent colors in a way that's accessible to people with color blindness required careful thought. We focused on dominant color detection and providing easily interpretable color names.
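The "easily interpretable color names" idea reduces to a nearest-neighbor match against a small named palette; a minimal sketch, where the palette below is a tiny illustrative subset rather than the app's full table:

```python
import math

# Tiny illustrative palette; the real app can use a larger named-color table.
NAMED_COLORS = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def nearest_color_name(rgb):
    """Return the palette name closest to `rgb` by Euclidean distance."""
    return min(NAMED_COLORS, key=lambda name: math.dist(NAMED_COLORS[name], rgb))
```

Pairing the detected dominant color with its nearest name gives color-blind users a label instead of a raw RGB triple.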
Accomplishments that we're proud of:
Creating an Inclusive Solution: We're proud to have built a tool that focuses on inclusivity and accessibility, addressing real challenges faced by millions. It's our way of contributing to a more equitable and accessible world.
Seamless Integration with Modern Technology: Integrating the Gemini API to fill gaps in accessibility information and using ColorThief for dominant color detection showcased our technical capabilities and allowed us to build a smarter, data-driven app.
Developing for a Diverse Audience: We managed to address multiple accessibility needs—navigating spaces for those with mobility challenges and supporting color-blind users. This diversity in our feature set shows our commitment to serving a wide audience.
User-Friendly and Intuitive Design: Designing a simple and intuitive interface for people of all abilities was a major achievement. We made sure that the app is accessible itself, with clear navigation, good contrast, and compatibility with assistive technologies.
Providing Accurate and Valuable Information: We’re proud that our app can provide reliable accessibility information, whether from our knowledge base or through AI-generated insights when data is missing. This combination of sources ensures that users always get helpful feedback.
What we learned:
During this project, we learned a lot about both technical development and accessibility needs:
Accessibility Standards: We dove deep into accessibility guidelines, learning what makes a space accessible for people with disabilities. This included understanding how ramps, elevators, and automatic doors play a crucial role in accessible navigation.
Technical Skills: We enhanced our skills in web development, API integration, and image processing. Working with the Gemini API, handling JSON data, and integrating front-end components sharpened our coding knowledge.
Color Accessibility: We also explored accessibility for people with color blindness, learning how color perception works and how technology can aid those who experience color differently. Implementing a feature to detect dominant colors in images helped us think about the practical implications of visual accessibility.
What's next for EaseAccess:
As we look to the future, we’re excited about how we can continue to evolve and expand EaseAccess. Here are some of the next steps we’re planning:
- Expanding the Knowledge Base:
We aim to grow our database to include a wider range of buildings, universities, and public spaces. Partnering with institutions and accessibility organizations will help us gather more accurate and comprehensive data. We plan to add more detailed accessibility features, like information about sensory-friendly rooms, braille signs, tactile maps, and height-adjustable facilities.
- Support for More Accessibility Needs:
We’re planning to expand our features to include more accessibility tools, like text-to-speech for the visually impaired, sign language guides for deaf users, and easy-to-read formats for those with cognitive disabilities. Enhancing the app's ability to detect color contrasts and suggest adjustments for color-blind users will provide a deeper level of support in visual accessibility.
- Enhanced AI Capabilities:
We’re exploring the use of AI to not only fill gaps in our current data but to provide real-time suggestions and updates. For example, if accessibility features change or buildings undergo renovations, our AI system could provide instant updates to users. We plan to integrate AI-driven voice assistance, allowing users to interact with the app using voice commands, making it even more accessible.
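The color-contrast check planned above can follow the WCAG 2.x definition of contrast ratio (relative luminance with the sRGB transfer curve); a minimal sketch:

```python
def _linearize(c):
    """Linearize one sRGB channel (0-255) per the WCAG 2.x formula."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance of an sRGB color, in [0, 1]."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

WCAG recommends a ratio of at least 4.5:1 for normal text, so the app could flag color pairs that fall below that threshold.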