Inspiration
Many communication systems assume that speech must be fluent, immediate, and perfectly articulated to be valid. For people with speech and language impairments such as aphasia, apraxia of speech, post-stroke language difficulties, or severe stuttering, this assumption creates a quiet but powerful form of systemic bias.
In stressful or high-stakes situations, people often know exactly what they want to say but cannot retrieve or articulate words reliably. Existing solutions either rely on slow manual tapping, expect fluent speech input, or let AI “guess” meaning without user control. This leads to frustration, loss of agency, and exclusion from everyday communication.
Intentify was inspired by a simple question: What if communication systems listened for intent instead of perfection?
What it does
Intentify is an assistive communication app that enables intent-first communication.
Users build a personal library of short, meaningful intent phrases such as:
- “I need more time to respond”
- “Please call my caregiver”
- “I’m in pain here”
- “Yes / No / Not yet”
When a user speaks, even if the speech is incomplete or unclear, Intentify:
- Transcribes the audio
- Matches it against the user’s approved intent library
- Suggests the most likely intended meaning
- Requires explicit user confirmation
If no confident match exists, the AI suggests a new intent, but the user always decides.
Once confirmed, Intentify can speak the intent clearly using synthesized speech or provide clean text for messaging, allowing users to communicate confidently and on their own terms.
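As a rough illustration of this flow, here is a minimal TypeScript sketch of the confirmation-gated pipeline. The helper functions (`transcribeAudio`, `matchIntent`, `suggestNewIntent`, `confirmWithUser`, `speakIntent`) are hypothetical placeholders for the steps described above, not the actual implementation.

```typescript
// Hypothetical types and helpers illustrating the intent-first flow.
interface Intent {
  id: string;
  phrase: string; // e.g. "Please call my caregiver"
}

interface MatchResult {
  intent: Intent;
  confidence: number; // 0..1 similarity score
}

// Placeholder signatures -- the real app wires these to the backend.
declare function transcribeAudio(audioUri: string): Promise<string>;
declare function matchIntent(transcript: string, library: Intent[]): Promise<MatchResult | null>;
declare function suggestNewIntent(transcript: string): Promise<Intent>;
declare function confirmWithUser(intent: Intent): Promise<boolean>;
declare function speakIntent(intent: Intent): Promise<void>;

// The user always confirms before anything is spoken on their behalf.
async function handleUtterance(audioUri: string, library: Intent[]): Promise<void> {
  const transcript = await transcribeAudio(audioUri);
  const match = await matchIntent(transcript, library);

  // Fall back to an AI-suggested intent only when no confident match exists.
  const candidate = match ? match.intent : await suggestNewIntent(transcript);

  if (await confirmWithUser(candidate)) {
    await speakIntent(candidate); // synthesized speech or clean text output
  }
  // If the user rejects the suggestion, nothing is spoken -- the user decides.
}
```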
How we built it
Intentify is built as a mobile-first, serverless system focused on speed, privacy, and user control.
Frontend
- Expo + React Native
- Simple, low-friction UI designed for cognitive and motor accessibility
- Automatic intent suggestions with clear confirmation flows (a minimal sketch follows this list)
- Local device-based identity for rapid prototyping
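As one example of the confirmation flow, below is a minimal React Native sketch of a suggestion card that requires an explicit tap before anything is spoken. Component and prop names are illustrative, not the production UI.

```typescript
import React from "react";
import { View, Text, Pressable, StyleSheet } from "react-native";

// Illustrative props: a suggested intent phrase plus explicit accept/reject callbacks.
interface IntentSuggestionProps {
  phrase: string;
  onConfirm: () => void;
  onReject: () => void;
}

// Nothing is spoken until the user taps "Speak this"; rejection is always one tap away.
export function IntentSuggestion({ phrase, onConfirm, onReject }: IntentSuggestionProps) {
  return (
    <View style={styles.card}>
      <Text style={styles.label}>Did you mean:</Text>
      <Text style={styles.phrase}>{phrase}</Text>
      <Pressable accessibilityRole="button" style={styles.confirm} onPress={onConfirm}>
        <Text style={styles.buttonText}>Speak this</Text>
      </Pressable>
      <Pressable accessibilityRole="button" style={styles.reject} onPress={onReject}>
        <Text style={styles.buttonText}>Not what I meant</Text>
      </Pressable>
    </View>
  );
}

const styles = StyleSheet.create({
  card: { padding: 24, borderRadius: 12, backgroundColor: "#fff" },
  label: { fontSize: 18 },
  phrase: { fontSize: 28, fontWeight: "600", marginVertical: 16 },
  confirm: { padding: 20, borderRadius: 8, backgroundColor: "#2e7d32" },
  reject: { padding: 20, borderRadius: 8, backgroundColor: "#757575", marginTop: 12 },
  buttonText: { color: "#fff", fontSize: 20, textAlign: "center" },
});
```

Large touch targets and a single suggestion at a time keep the cognitive and motor load low, in line with the accessibility goals above.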
Backend
- AWS API Gateway + Lambda (Node.js)
- Audio uploaded securely to Amazon S3 via presigned URLs (see the sketch after this list)
- Speech transcription and intent matching handled server-side
- DynamoDB stores intent libraries and recording metadata
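For the upload path, a minimal sketch of a Lambda handler that issues a presigned S3 PUT URL could look like the following; the bucket environment variable, key layout, and expiry time are assumptions for illustration only.

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import type { APIGatewayProxyHandler } from "aws-lambda";

const s3 = new S3Client({});
const BUCKET = process.env.AUDIO_BUCKET!; // assumed environment variable

// Returns a short-lived URL the app can PUT the recording to directly,
// so raw audio never passes through the Lambda itself.
export const handler: APIGatewayProxyHandler = async (event) => {
  const { userId, recordingId } = JSON.parse(event.body ?? "{}");
  const key = `recordings/${userId}/${recordingId}.m4a`; // illustrative key layout

  const uploadUrl = await getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: BUCKET, Key: key, ContentType: "audio/m4a" }),
    { expiresIn: 300 } // URL valid for 5 minutes
  );

  return {
    statusCode: 200,
    body: JSON.stringify({ uploadUrl, key }),
  };
};
```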
AI Integration
- OpenAI Speech-to-Text for transcription
- Embeddings + cosine similarity for intent matching (see the matching sketch below)
- GPT-based fallback intent suggestion when confidence is low
- Amazon Polly generates clear, natural speech for confirmed intents
The system is designed so AI assists, but never overrides the user.
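For the matching step, here is a minimal sketch under the assumption that each saved intent phrase is embedded once and stored with the library; the embedding model name and the confidence threshold are illustrative values, not tuned ones.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();
const MATCH_THRESHOLD = 0.8; // illustrative cutoff, not a tuned value

interface StoredIntent {
  id: string;
  phrase: string;
  embedding: number[]; // precomputed when the user saves the intent
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Embed the transcript, compare against every stored intent, and return the
// best match only if it clears the confidence threshold; otherwise return null
// so the caller can fall back to a GPT-suggested intent for the user to review.
async function matchIntent(transcript: string, library: StoredIntent[]) {
  const response = await openai.embeddings.create({
    model: "text-embedding-3-small", // assumed embedding model
    input: transcript,
  });
  const queryEmbedding = response.data[0].embedding;

  let best: { intent: StoredIntent; score: number } | null = null;
  for (const intent of library) {
    const score = cosineSimilarity(queryEmbedding, intent.embedding);
    if (!best || score > best.score) best = { intent, score };
  }

  return best && best.score >= MATCH_THRESHOLD ? best : null;
}
```

Returning null rather than forcing the nearest neighbour is what keeps low-confidence speech from producing incorrect matches: the fallback suggestion, like everything else, still goes to the user for confirmation.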
Challenges we ran into
- Designing AI assistance without removing user agency
- Handling low-confidence speech without forcing incorrect matches
- Balancing real-time responsiveness with serverless constraints
- Avoiding bias introduced by over-automation
- Scoping a complex accessibility problem into a hackathon-sized MVP
Accomplishments that we're proud of
- Building a working end-to-end intent-first communication system
- Designing AI fallback that only activates when confidence is low
- Ensuring user confirmation is always required before output
- Creating a system that speaks for the user without speaking over them
- Strong alignment with accessibility, consent, and bias-mitigation principles
What we learned
- Accessibility failures often come from assumptions, not lack of technology
- Users value control and confirmation more than “smart” automation
- Separating intent from delivery dramatically reduces communication friction
- Ethical AI design starts with deciding when not to automate
What's next for Intentify
- Offline mode for accessing intent libraries without internet
- Login and sign-up for secure, personalized accounts
- User preferences for voice avatars and speech styles
- Speech-to-text correction to improve intent matching accuracy
- A custom LLM fine-tuned for intent-first communication and speech impairment patterns
Built With
- amazon-dynamodb
- amazon-web-services
- api
- expo.io
- lambda
- node.js
- openai
- polly
- react-native
- s3
- sam
- typescript