Inspiration:

One day, one of our group members, Cole, was travelling in Europe when he encountered a serious problem: he couldn't communicate with anyone. Although he considered using Google Translate, he worried that its translations might not convey the true meaning of his words. This was especially concerning as he had allergies and needed shelter. How could he ensure his safety and well-being without effective communication? Furthermore, what about deaf individuals, who cannot understand the words spoken around them? While we are not deaf, we empathise with the isolation that communication barriers can create. This inspired us to create AccessAbility, a universal communication hub platform.

Value-Proposition/Positioning Statement:

For individuals who cannot speak the local language or who are deaf, AccessAbility is an all-in-one universal communication platform that empowers them to communicate seamlessly. It facilitates conversations across languages, enabling true universal communication and breaking down existing barriers.

What it Does:

The AccessAbility platform takes in audio from the user, transcribes it into English text, and then translates the transcription into the user's native/chosen language before sending it to the frontend. This allows two users to speak to each other in their own native tongues, with each message displayed in the other user's language.
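As a rough illustration of that flow (not our exact code), a backend sketch of the transcribe-then-translate pipeline might look like the following. It assumes an Azure Speech resource for transcription and an OpenAI key for translation; the environment variables, model name, and helper names are illustrative.

```typescript
// Minimal sketch: transcribe English speech, then translate the text.
// Assumes the "microsoft-cognitiveservices-speech-sdk" and "openai" packages
// and illustrative environment variables (not our exact configuration).
import * as fs from "fs";
import * as sdk from "microsoft-cognitiveservices-speech-sdk";
import OpenAI from "openai";

const speechConfig = sdk.SpeechConfig.fromSubscription(
  process.env.SPEECH_KEY!,
  process.env.SPEECH_REGION!
);
speechConfig.speechRecognitionLanguage = "en-US";

// Transcribe a short WAV clip uploaded from the frontend.
function transcribe(wavPath: string): Promise<string> {
  const audioConfig = sdk.AudioConfig.fromWavFileInput(fs.readFileSync(wavPath));
  const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);
  return new Promise((resolve, reject) => {
    recognizer.recognizeOnceAsync(
      (result) => resolve(result.text),
      (err) => reject(err)
    );
  });
}

// Translate the transcript into the listener's chosen language via OpenAI.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
async function translateTranscript(text: string, targetLanguage: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model name
    messages: [
      { role: "system", content: `Translate the user's message into ${targetLanguage}.` },
      { role: "user", content: text },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

// Example: English audio in, Spanish text out to the other user's frontend.
async function relay(wavPath: string, targetLanguage: string): Promise<string> {
  return translateTranscript(await transcribe(wavPath), targetLanguage);
}
```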

How we Built It:

  1. First, we each individually completed Phase 1, where we built our own custom AI copilots and worked through the Microsoft Learn modules.
  2. After completing Phase 1, we began brainstorming Phase 2 and came up with a long list of ideas.
  3. From this list, we narrowed it down to a few possible ideas that we could connect together: translation/transcription, speech emotion recognition, and facial emotion detection.
  4. From there, we began trying to implement each of these options and found that speech emotion recognition and facial emotion detection were unrealistic within our timeframe.
  5. Although this would have been hypothetically possible with the Azure AI Face service, it requires approval and we simply didn't have time to make it work correctly; when provisioning and trying to customise the resource in Azure, it would not load.
  6. After deciding to move forward with the translation idea, we broke down the necessary steps for the remainder of the project: a frontend, a speech transcription backend, and an OpenAI backend. We also created a new collection in our Cosmos DB database to store user information (see the sketch after this list).
  7. From there, we finished implementing each individual component, ensuring that each could connect to the database and that they all worked.
  8. After connecting the individual components, we deployed everything through Azure and made sure it was all live.
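For step 6, here is a minimal sketch of the kind of user-information collection we set up, using the @azure/cosmos SDK. The database, container, and field names below are illustrative rather than our exact schema.

```typescript
// Sketch: create a container for user profiles and store each user's chosen
// language. Names are illustrative, not our exact schema.
import { CosmosClient } from "@azure/cosmos";

const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT!,
  key: process.env.COSMOS_KEY!,
});

async function saveUserLanguage(userId: string, language: string): Promise<void> {
  const { database } = await client.databases.createIfNotExists({ id: "accessability" });
  const { container } = await database.containers.createIfNotExists({
    id: "users",
    partitionKey: "/id",
  });
  // Upsert so repeated sign-ins simply update the stored preference.
  await container.items.upsert({ id: userId, preferredLanguage: language });
}
```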

Challenges we Ran Into:

  1. Challenge #1: The first challenge we ran into was in Phase 1, or more specifically, deploying our backend API. We all had issues where it wouldn't configure automatically, and we had to experiment with different deployment options to get it working correctly.
  2. Challenge #2: The second challenge was implementing the facial emotion recognition and speech emotion recognition pieces. We put a lot of time into researching them and trying to get them working, which unfortunately proved futile, as we couldn't get them implemented in the end.
  3. Challenge #3: Implementing a feature in our page to correctly record and transcribe audio input was also fairly difficult and produced a lot of errors (see the sketch after this list).
  4. Challenge #4: The last challenge we faced was actually deploying everything. We found it very complicated to provision the necessary resources and connect them all correctly.
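For context on Challenge #3, the in-page recording logic we wrestled with looks roughly like the following simplified browser sketch using the standard MediaRecorder API; the `/api/transcribe` endpoint name is illustrative.

```typescript
// Simplified browser sketch: record microphone audio and send it to the
// transcription backend. The "/api/transcribe" endpoint name is illustrative.
async function recordAndSend(durationMs: number): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];

  recorder.ondataavailable = (event) => chunks.push(event.data);
  recorder.onstop = async () => {
    const audioBlob = new Blob(chunks, { type: recorder.mimeType });
    const form = new FormData();
    form.append("audio", audioBlob, "clip.webm");
    await fetch("/api/transcribe", { method: "POST", body: form });
    // Release the microphone once the clip has been uploaded.
    stream.getTracks().forEach((track) => track.stop());
  };

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
}
```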

Accomplishments that We’re Proud of:

  1. Accomplishment #1: We all finished Phase 1 successfully, which we are proud of. Successfully building and deploying a custom copilot is a great learning experience!
  2. Accomplishment #2: Although we were competing against software engineers with decades of experience, we believe we produced a final product that holds real value for consumers and individuals, and that is genuinely valuable for bridging the accessibility gap between users.
  3. Accomplishment #3: We successfully integrated Cosmos DB and OpenAI into our final product, and we also managed to use numerous resources across the Azure suite.
  4. Accomplishment #4: We feel like our project can really make a difference.

What we Learned:

  1. The first thing we learned is to begin working earlier. This was the first hackathon for three of our four members, and we only began with three weeks left, which meant we were not able to integrate and experiment with all the features we wanted to. Keeping this lesson in mind, we now know to start much earlier, as this will let us explore far more possibilities.
  2. The second lesson we learned was about the various Azure services themselves. Throughout the hackathon we learned about many different AI applications, from ML to LLMs to computer vision. This will all prove useful in our future careers and when updating AccessAbility.
  3. The third lesson we learned is to think innovatively: when coming up with ideas, it's better to do some broad brainstorming before narrowing in. Since we started with already-narrowed ideas, we focused too early on facial emotion recognition and speech emotion recognition, which, once we realised they were not feasible, took time away from working on the translation features.
  4. The last lesson we learned was to begin working on the backend before the frontend, and to have all members help with the backend. We initially started with the frontend, which proved much more difficult: rather than building the frontend as essentially a wrapper around our backend, we ended up trying to make our backend architecture fit our frontend.

What’s next for AccessAbility:

Next up for AccessAbility is looking at how we can further bridge the gap so that disabled individuals can truly access their potential, whether that means giving blind users facial emotion recognition and descriptions of their surroundings, or giving deaf users more features, such as speech emotion recognition, to better understand those around them. The long-term goal for AccessAbility is to become a hub of accessibility options for the individuals who need them, eventually becoming a platform where any disabled individual can arrive and get the help they need.

Built With:

Microsoft Azure, Azure Cosmos DB, OpenAI