Inspiration
Some of us are pursuing language-based I-Explore modules, which require us to converse confidently in our chosen language with strangers. It is hard to find language partners, and hard to get dedicated teaching time devoted to speaking.
What it does
This solves that issue by allowing one-to-one interaction with a human-like avatar, building confidence and a better understanding of verbal communication through personalised real-time feedback. It also provides reports to the teacher, letting them tailor their lessons to what students have been struggling with.
How we built it
We built the web application with React and Next.js, with Python powering the models behind it. We also used Tailwind CSS and daisyUI to make the interface look good.
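For illustration, here is a minimal sketch of how the Next.js frontend might hand recorded audio to the Python models. FastAPI, the `/converse` endpoint, and the field names are assumptions made for this sketch, not the exact interface of the demo.

```python
# Hypothetical bridge between the Next.js frontend and the Python models.
# FastAPI and every name here are illustrative assumptions, not the demo's API.
from fastapi import FastAPI, UploadFile
from pydantic import BaseModel

app = FastAPI()

class Feedback(BaseModel):
    transcript: str          # what the student said
    corrections: list[str]   # personalised pointers shown next to the avatar

@app.post("/converse", response_model=Feedback)
async def converse(audio: UploadFile) -> Feedback:
    raw = await audio.read()              # audio blob recorded in the browser
    transcript = transcribe(raw)          # multilingual speech-to-text step
    corrections = analyse(transcript)     # feedback / report-generation step
    return Feedback(transcript=transcript, corrections=corrections)

def transcribe(raw: bytes) -> str:
    return ""    # placeholder: plug in the speech-to-text model here

def analyse(transcript: str) -> list[str]:
    return []    # placeholder: plug in the feedback model here
```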
Challenges we ran into
Multilingual speech-to-text was one of the main things we had to deal with. The models don't run fast enough on our hardware to make real-time responses easy; this is more a resource issue on our end, and it is possible with better equipment.
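As a rough sketch of the transcription step (using the open-source Whisper model as a stand-in, since the exact model isn't named here), even the smaller checkpoints add noticeable per-utterance latency on CPU-only laptops:

```python
# Hedged sketch: multilingual speech-to-text via the open-source Whisper
# model, standing in for whatever STT model the demo actually ran.
# pip install openai-whisper
import time
import whisper

model = whisper.load_model("small")  # bigger checkpoints are more accurate but slower

def transcribe_file(path: str, language: str | None = None) -> str:
    """Transcribe an audio file; language=None lets Whisper auto-detect it."""
    start = time.time()
    result = model.transcribe(path, language=language, fp16=False)  # fp16=False on CPU
    print(f"transcribed in {time.time() - start:.1f}s")  # the latency that makes real time hard
    return result["text"]

# e.g. transcribe_file("student_reply.wav", language="es")
```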
Accomplishments that we're proud of
We are proud that we got a working demo, with a decent UI that helped us explain the larger application to the judges.
What we learned
Deepfaking is easy and pretty scary. It can also be used for a multitude of purposes.
What's next for LinguaLink AI
Expanding to a larger product with more languages and progressive difficulty levels. We'd also like teachers to be able to upload vocabulary lists and quizzes so students can practise the conversational skills they need.
We could also expand beyond languages to other subjects, reusing the same deepfake avatar and feedback mechanism for assignments there. Interview prep is another possible extension.
Built With
- javascript
- next-js
- python
- react
- tailwind-css