💡Inspiration:

Google glasses, AR/VR assistance tech.

❓ What it does:

Records live audio input and displays a transcription with a label for each unique speaker's voice.

⚒️How we built it:

Our team used Deepgram and pyannote to record live audio and transcribe it into text, with a machine learning model that applies speaker diarization to differentiate between multiple voices.
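The core of this pipeline is combining two outputs: speaker turns from a diarization model (as pyannote produces) and word-level timestamps from a transcription service (as Deepgram returns). Below is a minimal, self-contained sketch of that merging step; the data and function name are hypothetical, and a real build would feed in actual API results rather than hard-coded lists.

```python
# Sketch of the labeling step: assign each transcribed word to the speaker
# whose diarization turn contains its timestamp, then group consecutive
# words by speaker into labeled transcript lines.
# All inputs here are hypothetical example outputs, not real API responses.

def label_transcript(turns, words):
    """turns: list of (speaker, start_sec, end_sec); words: list of (word, time_sec)."""
    lines = []
    for word, t in words:
        # Find the turn whose time window contains this word (else "Unknown").
        speaker = next((s for s, a, b in turns if a <= t < b), "Unknown")
        if lines and lines[-1][0] == speaker:
            lines[-1][1].append(word)          # same speaker: extend the line
        else:
            lines.append((speaker, [word]))    # new speaker: start a new line
    return [f"{s}: {' '.join(ws)}" for s, ws in lines]

turns = [("Speaker 1", 0.0, 2.0), ("Speaker 2", 2.0, 4.0)]
words = [("hello", 0.5), ("there", 1.2), ("hi", 2.3), ("back", 3.1)]
print(label_transcript(turns, words))
```

The grouping step is what turns raw word/timestamp pairs into the per-speaker display lines the site renders.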

🏋️‍♂️Challenges we ran into:

Connecting the React.js front end to the Python back end was challenging, as it was our first time wiring the two together and most of us had little to no prior experience with full-stack development.
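One common pattern for this kind of React-to-Python connection is having the back end expose a JSON endpoint that the front end polls with `fetch`. Here is a minimal sketch using only the Python standard library; the `/transcript` route, port, and payload are hypothetical, and a real app would more likely use Flask or FastAPI with proper CORS configuration.

```python
# Minimal stdlib sketch of a back-end JSON endpoint a React front end can fetch.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TranscriptHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/transcript":
            # Hypothetical payload: the labeled transcript lines to display.
            body = json.dumps({"lines": ["Speaker 1: hello there"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            # Let the React dev server (a different origin) read the response.
            self.send_header("Access-Control-Allow-Origin", "*")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To serve:
# HTTPServer(("localhost", 8000), TranscriptHandler).serve_forever()
```

The front end would then call something like `fetch("http://localhost:8000/transcript").then(r => r.json())` and render the returned lines.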

🏆 Accomplishments that we're proud of:

Building a cohesive project and a responsive website that uses AI to recognize and label individual voices.

📚What we learned:

Our group feels that we now have a solid understanding of how to connect a project's front-end and back-end features, as well as how to work with machine learning models.

⌛What's next for Visibly:

Now that we have a working version of the software, our team wants to develop hardware that attaches to a pair of glasses and displays the text in the wearer's field of view. We would also like to add features such as tone recognition, name detection, and automatic language translation, and to train our own model for improved speaker recognition.
