Inspiration

Giving every hand a voice.

Communication should never be a limitation. Yet millions of Deaf and mute individuals rely on sign language that most of the world does not understand. In classrooms, workplaces, and everyday interactions, this creates an invisible barrier. FlexTalk was inspired by the simple but powerful question: What if gestures could speak for themselves—instantly and accurately? We wanted to bridge the gap between sign language users and non-signers using affordable, wearable technology.

What it does

FlexTalk is a wearable sign-language translation system that converts hand gestures into readable text in real time.

• Uses flex sensors embedded in a glove to capture finger movements
• Processes gesture data using machine-learning-based classification
• Translates recognized signs into instant text output on a display
• Enables seamless communication between Deaf/mute users and non-signers

The system is lightweight, portable, and designed for everyday real-world use, not just the lab.
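The sensing step above boils down to turning each flex sensor's raw analog reading into a normalized bend value. A minimal sketch of that normalization is below; the calibration constants `kAdcStraight` and `kAdcBent` are hypothetical placeholders (in practice they would be measured per glove and per user), not values from the FlexTalk hardware itself.

```cpp
#include <array>

// Hypothetical calibration values: the raw 10-bit ADC reading observed
// with the finger fully straight and fully bent. These are assumptions
// for illustration; a real glove would measure them during calibration.
constexpr int kAdcStraight = 300;
constexpr int kAdcBent = 700;

// Normalize a raw ADC reading into a bend fraction:
// 0.0 = fully straight, 1.0 = fully bent, clamped to that range.
double bendFraction(int adcReading) {
    double frac = static_cast<double>(adcReading - kAdcStraight) /
                  (kAdcBent - kAdcStraight);
    if (frac < 0.0) frac = 0.0;
    if (frac > 1.0) frac = 1.0;
    return frac;
}

// One gesture frame: the bend fraction of all five fingers
// (thumb, index, middle, ring, pinky).
using GestureFrame = std::array<double, 5>;
```

On an Arduino the `adcReading` would come from `analogRead()` on the pin each sensor's voltage divider is wired to; the clamping makes the feature robust to readings slightly outside the calibrated range.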

How we built it

• Hardware
  • Flex sensors mounted on fingers to capture bending patterns
  • Microcontroller (Arduino) for sensor acquisition and processing
  • LCD interface for real-time text output
• Software & Data
  • Custom dataset collection for multiple hand gestures
  • Feature extraction from sensor readings
  • Machine learning (Decision Tree–based classification)
  • Optimized inference logic for real-time performance
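A trained decision tree compiles down to nested threshold tests over the finger-bend features, which is why it runs comfortably in real time on a microcontroller. The sketch below is an illustrative hand-written tree over three made-up gestures ("open", "point", "fist") with an assumed split threshold of 0.6; the actual FlexTalk tree, thresholds, and gesture vocabulary come from its trained model, not from this example.

```cpp
#include <array>
#include <string>

// Bend fraction per finger, 0.0 (straight) .. 1.0 (bent):
// index 0 = thumb, 1 = index, 2 = middle, 3 = ring, 4 = pinky.
using GestureFrame = std::array<double, 5>;

// Illustrative decision tree. A real trained tree exports the same
// shape: nested if/else threshold tests ending in a class label.
std::string classifyGesture(const GestureFrame& f) {
    const double kBent = 0.6;  // hypothetical split threshold

    if (f[1] < kBent) {            // index finger straight
        if (f[2] < kBent) {
            return "open";         // index and middle both straight
        }
        return "point";            // only the index is straight
    }
    return "fist";                 // index (and the rest) bent
}
```

Because inference is just a handful of comparisons, the label can be pushed to the LCD on every loop iteration with no perceptible latency.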
