Inspiration

ReadAble was inspired by the everyday challenges faced by people with dyslexia, visual impairments, and reading fatigue when interacting with printed text. Many existing solutions rely on cloud services, require internet access, or compromise user privacy. I wanted to create a tool that works entirely on-device, respects user data, and focuses on readability and accessibility first.

What it does

ReadAble allows users to capture or upload images of text and instantly convert them into readable, accessible content. The app performs on-device OCR, processes the extracted text for better structure and readability, and reads it aloud using text-to-speech. Users can customize the reading experience with dyslexia-friendly fonts, focus modes, and sentence-by-sentence playback, making reading more comfortable and inclusive.
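The sentence-by-sentence playback described above can be sketched as a small async loop. This is an illustrative sketch, not the app's actual implementation: the `speak` function is injected so that in the app it could be backed by expo-speech's `Speech.speak`, while tests can pass a stub; the splitter and function names here are assumptions.

```typescript
// A speak function resolves when the sentence has finished playing.
// In the app this would wrap expo-speech; here it is injected so the
// loop stays testable (an assumption, not the project's actual code).
type SpeakFn = (sentence: string) => Promise<void>;

// Naive sentence splitter: break after ., !, or ? followed by whitespace.
export function splitSentences(text: string): string[] {
  return text
    .split(/(?<=[.!?])\s+/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

// Play one sentence at a time, awaiting each before starting the next,
// so the UI can highlight the sentence currently being read aloud.
export async function playSentences(
  text: string,
  speak: SpeakFn,
  onSentence?: (index: number, sentence: string) => void
): Promise<void> {
  const sentences = splitSentences(text);
  for (let i = 0; i < sentences.length; i++) {
    onSentence?.(i, sentences[i]);
    await speak(sentences[i]);
  }
}
```

Driving playback from a single awaited loop keeps pause/resume logic simple: stopping the loop stops the queue, with no timers to clean up.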

How we built it

I built ReadAble using React Native with Expo to support iOS, Android, and web from a single codebase. On-device OCR is handled using ML Kit on Android and Apple’s Vision framework on iOS, accessed through native bridges. The app processes text locally using custom NLP logic for sentence splitting, keyword scoring, and structure detection. Text-to-speech is powered by Expo Speech, and all data is stored locally using AsyncStorage to maintain an offline-first architecture.
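The "keyword scoring" step mentioned above could look something like the following. This is a minimal sketch assuming a simple term-frequency approach with stop-word filtering; the project's actual custom NLP logic may differ, and the function names and stop-word list are illustrative.

```typescript
// Small illustrative stop-word list; a real one would be much larger.
const STOP_WORDS = new Set([
  "the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
]);

// Score each non-stop-word by how often it appears, case-insensitively.
export function scoreKeywords(text: string): Map<string, number> {
  const scores = new Map<string, number>();
  const words = text.toLowerCase().match(/[a-z']+/g) ?? [];
  for (const word of words) {
    if (STOP_WORDS.has(word)) continue;
    scores.set(word, (scores.get(word) ?? 0) + 1);
  }
  return scores;
}

// Return the top-k keywords by score, e.g. to emphasize them in a
// focus mode or to pick headings during structure detection.
export function topKeywords(text: string, k: number): string[] {
  return [...scoreKeywords(text).entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, k)
    .map(([word]) => word);
}
```

Because the scoring is pure string processing, it runs entirely on-device with no network dependency, in keeping with the offline-first design.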

Challenges we ran into

One of the main challenges was implementing reliable OCR across platforms while keeping all processing on-device. Managing differences between Android and iOS OCR APIs required careful abstraction. Another challenge was balancing performance with accessibility features such as real-time text processing and speech playback. Ensuring a smooth, responsive experience without relying on cloud services required thoughtful optimization and testing.
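The "careful abstraction" over the two OCR APIs might be shaped like a shared interface with one adapter per platform. This is a hedged sketch: the interface, field names, and selection function are assumptions; in the app the platform would come from react-native's `Platform.OS` and the implementations from the native bridges to ML Kit and Vision.

```typescript
// Shared result shape both platform adapters normalize into.
export interface OcrResult {
  text: string;
  confidence: number; // 0..1, normalized across platforms
}

// Common interface the rest of the app codes against, regardless of
// whether ML Kit (Android) or Vision (iOS) does the recognition.
export interface TextRecognizer {
  recognize(imageUri: string): Promise<OcrResult>;
}

// Pick the adapter for the current platform. Taking the platform as a
// parameter (rather than reading Platform.OS directly) keeps this
// selectable in tests.
export function pickRecognizer(
  platform: "ios" | "android",
  impls: { ios: TextRecognizer; android: TextRecognizer }
): TextRecognizer {
  return platform === "ios" ? impls.ios : impls.android;
}
```

Keeping all platform differences behind one interface means the text-processing and playback layers never branch on the platform themselves.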

Accomplishments that we're proud of

I am proud of building a fully offline, privacy-respecting accessibility tool that works across platforms. Successfully integrating platform-specific OCR while maintaining a clean shared codebase was a major achievement. I am also proud of designing an interface that prioritizes dyslexia-friendly reading through font choice, focus modes, and customizable playback.

What we learned

Through this project, I learned how to design and implement an offline-first mobile application with native integrations. I gained experience working with OCR pipelines, accessibility-focused UI design, and cross-platform development using Expo. I also learned the importance of simplifying complex text into a more readable form for real users.

What's next for ReadAble

Next, I plan to improve text analysis with smarter document structuring and summarization. I also want to add optional cloud-based features, such as advanced language models, strictly as opt-in enhancements. Additional accessibility options, better document management, and expanded language support are also planned to make ReadAble even more inclusive.

Built With

react-native · expo · ml-kit · apple-vision · expo-speech · asyncstorage
