Inspiration

Noise in hospitals and other healthcare rooms has been shown to obstruct communication among staff, causing annoyance, irritation, and fatigue, all of which degrade healthcare quality and safety. According to AMN Healthcare Education Services, measured hospital noise far exceeds the World Health Organization's recommendations for average hospital-room noise levels, with peaks of 80.3 dB, almost as loud as a chainsaw. High noise levels therefore harm healthcare workers as they try to communicate and stay focused in a hectic environment for long stretches of time, especially during the COVID-19 pandemic.

What it does

Coherence addresses one of the most common causes of hospital miscommunication: excessive background noise. Our mobile app lets medical personnel communicate in real time without transmitting background noise. In addition, we process their voices and amplify common vocal patterns to ensure that no information is misheard.
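To give a feel for the noise-suppression side, here is a minimal spectral-gating sketch: estimate a noise floor from a noise-only sample, then attenuate frequency bins that do not rise clearly above it. This is an illustration of the general technique, not Coherence's actual algorithm; the function name and the 2x threshold are our own choices.

```python
import numpy as np

def suppress_noise(audio: np.ndarray, noise_sample: np.ndarray,
                   frame_size: int = 512) -> np.ndarray:
    """Gate frequency bins that fall below a noise-floor estimate.

    Illustrative spectral gating, not Coherence's production pipeline.
    """
    # Noise floor: magnitude spectrum of a noise-only segment.
    noise_profile = np.abs(np.fft.rfft(noise_sample[:frame_size]))
    out = np.copy(audio).astype(np.float64)
    for start in range(0, len(audio) - frame_size + 1, frame_size):
        frame = audio[start:start + frame_size].astype(np.float64)
        spectrum = np.fft.rfft(frame)
        # Keep bins well above the noise floor; zero out the rest.
        mask = np.abs(spectrum) > 2.0 * noise_profile
        out[start:start + frame_size] = np.fft.irfft(spectrum * mask,
                                                     n=frame_size)
    return out
```

Real deployments would add overlapping windows and smoother attenuation, but the gating idea is the same.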

How we built it

The front end of the mobile app was built with React Native, with Firebase as the database. Users first authenticate with Firebase and are then asked to enter a unique ID that corresponds to a specific room; an API call to a Node server lets them join a network of other users. To record audio, a user presses and holds the microphone button and releases it when finished. On release, the app sends the audio file to our Flask server via an API endpoint, where it is processed by our noise-suppression and voice-amplification algorithms. Once processing in the Flask API finishes, we upload the finalized file to Firebase Storage and then update the file name and ID in the Realtime Database. Firebase takes care of the rest: the Realtime Database sends an event to all clients in the room, so everyone receives and hears the latest audio as soon as it is recorded and processed.

Challenges we ran into

We wanted Coherence to work in real time across multiple devices, and this was our first attempt at something so complex. We used Firebase's Realtime Database and Cloud Storage to accomplish this. The interactions ended up being far more complex than we expected: when we sent the recording's audio file to the API, we first had to push it to Cloud Storage and only then update the Realtime Database with its signature. In the end, though, we were able to streamline the process, allowing us to connect as many devices as we wanted to a single audio receiver.
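The ordering constraint we ran into can be shown in a few lines: the audio blob has to land in cloud storage before its signature is written to the realtime database, because the database write is what tells clients the file is ready to fetch. Both stores are stubbed with dicts here, and the helper name is ours.

```python
import hashlib

cloud_storage = {}   # stand-in for Firebase Cloud Storage
realtime_db = {}     # stand-in for the Realtime Database

def publish_recording(room_id: str, audio: bytes) -> str:
    """Two-step publish: upload the file first, then announce it."""
    signature = hashlib.sha256(audio).hexdigest()
    # Step 1: push the file itself to cloud storage.
    cloud_storage[signature] = audio
    # Step 2: only after the upload succeeds, write the signature to the
    # realtime DB. Reversing the order would let clients try to fetch a
    # file that does not exist yet.
    realtime_db.setdefault(room_id, []).append(signature)
    return signature
```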

Accomplishments that we're proud of

Learning how to use Firebase for real-time systems and how to connect our app to it. Completing the prototype ahead of schedule while maintaining excellent time management.

What's next for Coherence

Our top priority is to improve our noise-suppression and voice-amplification algorithms; we'd like to support a wider range of background noise sources, such as sirens. After that, we want to improve the room system by showing the total number of members in a room as well as their individual latencies. Working with so many new technologies and ways of thinking has undoubtedly piqued our interest in what lies ahead for Coherence!
