Inspiration

1 in 54 children in the US is diagnosed with an autism spectrum disorder (ASD) (CDC, 2020). A key area of development in children is the ability to understand and express emotions. Autistic children often find it hard to:

  • recognise emotions, facial expressions and other emotional cues like tone of voice and body language.
  • understand and respond to other people’s emotions – they might lack, or seem to lack, empathy with others.

However, early intervention affords the best opportunity to support healthy development and deliver benefits across the lifespans of the estimated 47 million people with autism worldwide (WHO, 2019).

Acumen's mission is to build a toolkit that helps children with autism develop the ability to recognise and understand emotions at an early age. We aim to improve social outcomes for the children, their parents, and the people they interact with.

How Acumen does this:

  1. Provides discreet guidance for children to recognise the current emotions of people speaking to them – through audio processing and deep-learning-powered emotion recognition.
  2. Gives parents insight through data dashboards and visualisations: their child's progress (the emotions evoked in others, over time and by location) and their child's stress levels – a window into mental wellbeing.
  3. Gamifies learning to recognise emotions through AR on the Oculus Quest.

1. Discreet guidance for children to recognise the emotions of people speaking to them: Humans have six basic emotions – happiness, surprise, sadness, anger, fear and disgust. By 5-7 years old, many autistic children can recognise happy and sad, but they have a harder time with subtler expressions of fear and anger. Recognising negative emotions is critical: it leads to more positive interactions with others and prevents escalation. A parent cannot monitor their child forever, so Acumen has developed a mobile application that discreetly assists in identifying these negative emotions. The child carries a phone with the audio recording feature activated, capturing speech directed at them. Our emotion classification model, which identifies 8 emotions with ~80% accuracy, classifies each detected emotion and makes the phone vibrate at increasing frequencies for more negative emotions (sadness, fear, disgust, anger). The child receives a warning in the moment and, over time, learns to associate each vibration pattern with a particular emotion – guided practice in identifying emotions. We wanted a discreet form of data collection that would not intrude on others' privacy the way a camera would, on a device commonplace enough that the child is not ostracised (imagine, by contrast, a camera perched on their shoulder).
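
To make this pipeline concrete, here is a minimal inference sketch, assuming a Keras model trained on mean MFCC features in the style of the marcogdepinto reference below; the model file, label order, and vibration scale are all illustrative, not our exact implementation.

```python
# Minimal sketch: classify one utterance and map the result to a vibration
# intensity. Assumes a Keras model trained on 40 mean MFCCs (as in the
# marcogdepinto reference); file name and label order are illustrative.
import numpy as np
import librosa
from tensorflow.keras.models import load_model

# Hypothetical label order for the 8-emotion training set (RAVDESS-style).
LABELS = ["neutral", "calm", "happy", "sad", "angry", "fearful", "disgust", "surprised"]

# More negative emotions map to stronger vibration (0 = no alert).
VIBRATION_LEVEL = {"sad": 1, "fearful": 2, "disgust": 2, "angry": 3}

model = load_model("emotion_model.h5")  # hypothetical trained model file

def classify_utterance(wav_path):
    """Extract 40 mean MFCCs from the clip and return the predicted emotion."""
    y, sr = librosa.load(wav_path, duration=3, offset=0.5)
    mfccs = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40).T, axis=0)
    probs = model.predict(mfccs.reshape(1, -1))  # reshape to the model's input
    return LABELS[int(np.argmax(probs))]

def vibration_for(emotion):
    """Intensity the app uses to pulse the phone; 0 means stay silent."""
    return VIBRATION_LEVEL.get(emotion, 0)
```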

2. Insight for parents through data dashboards and visualisations: Parents cannot always monitor their children. When their children go to school, to daycare, and on public outings, they worry; they cannot watch their children 24/7, but a parent's worry is 24/7. Nor can they rely on their autistic children to share feelings the children themselves do not fully understand. Acumen processes the emotions detected by the model, location services, and heart-rate and step data from Google Fit, and presents it all through a web data-visualisation dashboard. A map shows the emotions detected (emojis scale with magnitude), letting parents know early on if, say, their child is making people angry at school. We also calculate the child's stress levels from steps and heart-rate data and plot the trend on a graph, helping parents follow their child's mental wellbeing over time.
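
Our exact stress formula is a tuning detail; the idea, roughly, is to flag heart rate that is elevated beyond what the child's current activity level would explain. A hypothetical heuristic (all constants illustrative):

```python
# Illustrative stress heuristic, not our exact production formula: heart rate
# that is high relative to current activity suggests stress rather than exercise.
def stress_score(heart_rate_bpm, steps_per_min, resting_hr=80.0, max_hr=200.0):
    """Return a 0-1 stress score from Google Fit heart-rate and step samples."""
    # Expected heart rate rises with activity; brisk walking is ~100 steps/min.
    expected_hr = resting_hr + 0.4 * steps_per_min
    # Only heart rate *above* the activity-expected level counts as stress.
    excess = max(0.0, heart_rate_bpm - expected_hr)
    return min(1.0, excess / (max_hr - resting_hr))
```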

3. Gamified learning to recognise emotions through AR on the Oculus Quest: Learning emotions shouldn't be a chore, and gamification is a powerful way to motivate children to learn. We created a simple proof of concept on the Oculus Quest: a model that uses face.js to recognise facial emotions, overlaid on Oculus Passthrough mode (an AR overlay on the real world). For now the screen shakes when a happy face is detected, but we want to add more game elements so that parents can use it to teach their children about emotions with a built-in reward mechanism.

Features

Mobile app for kids

  • Record speech from the child's surroundings that is directed at the child

  • Display the emotion detected from speech and discreetly signal it to the child through vibrations (increasing frequency for more negative emotions like anger)

  • Record user health data related to the interaction (stress level calculated from heart rate and step speed from Google Fit)

  • Record location data and compile it with the detected emotion results

  • Use Twilio to provide OTP login verification (a minimal back-end sketch follows this list)
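
On the back end, the OTP flow can be handled with Twilio's Verify API; a minimal sketch (the credentials and service SID come from the environment, and the helper functions are illustrative):

```python
# Minimal Twilio Verify sketch for the OTP login flow. Account credentials and
# the Verify service SID are read from the environment; helpers are illustrative.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
VERIFY_SID = os.environ["TWILIO_VERIFY_SERVICE_SID"]

def send_otp(phone):
    """Ask Twilio to text a one-time code to the given phone number."""
    client.verify.services(VERIFY_SID).verifications.create(to=phone, channel="sms")

def check_otp(phone, code):
    """Return True if the code the user entered matches the one Twilio sent."""
    result = client.verify.services(VERIFY_SID).verification_checks.create(to=phone, code=code)
    return result.status == "approved"
```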

Web dashboard for parents

  • Web dashboard shows a map of emojis, each sized by the frequency of that emotion in the child's conversations and interactions with others (see the aggregation sketch after this list)

  • Calculate stress from Google Fit data.

  • Web/app can show interaction and stress history.
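
Behind the emoji map, detections just need to be grouped by place and emotion and counted; a hypothetical pandas sketch of that aggregation:

```python
# Hypothetical aggregation behind the emoji map: count detections per
# location/emotion pair so the dashboard can scale each emoji by frequency.
import pandas as pd

def emoji_map_data(detections):
    """detections: DataFrame with columns lat, lng, emotion (one row per event)."""
    counts = (detections
              .groupby(["lat", "lng", "emotion"])
              .size()
              .reset_index(name="count"))
    # Emoji size grows with frequency; cap it so one bad day can't fill the map.
    counts["emoji_size"] = counts["count"].clip(upper=20)
    return counts
```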

Oculus Quest AR proof-of-concept

  • Facial emotion detection via Oculus cameras
  • Overlay using passthrough mode

How we built it

  • Web development using React

  • Mobile development using React Native

  • Back-end development using Python

  • Emotion detection from speech and faces, using Python in Jupyter

  • Location detection with Google Maps

  • Vibration notifications using Expo

  • Twilio for OTP

  • Oculus AR (Passthrough mode) on the Oculus Quest

  • Face.js for facial emotion recognition

Challenges we ran into

We are an international team spanning 4 time zones across 4 continents: North America, Asia, Europe and Australia. Not all of us are awake at the same time, which we overcame with effective communication via Discord and planning via Miro. We started out as total strangers and finished having built, end to end, a project that tackles a pertinent problem. On the technical side, audio file formatting and encoding proved tricky when dealing with compressed file transfers.
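
One way to tame the encoding problem is to normalise whatever compressed format the phone uploads into the mono WAV a feature extractor expects; a sketch with pydub (assumes ffmpeg is installed, and the target sample rate is illustrative):

```python
# Normalise an uploaded compressed clip (e.g. m4a/AAC from the phone) to mono
# 16 kHz WAV before feature extraction. Requires ffmpeg on the system path.
from pydub import AudioSegment

def to_wav(in_path, out_path="clip.wav"):
    audio = AudioSegment.from_file(in_path)  # format inferred from the file
    audio = audio.set_channels(1).set_frame_rate(16000)
    audio.export(out_path, format="wav")
    return out_path
```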

Accomplishments that we're proud of

We built a functional prototype! The technology we used is cutting edge (based on recent research papers) and employs state-of-the-art methods for the domains we built in. Highlights:

  • An Oculus Quest AR app built in a short time
  • Taming audio file formats and encodings, plus app deployment with background recording and audio uploads
  • Serving models with Flask and ngrok on Google Colab
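
Serving the model from Colab works roughly like this; a minimal sketch using flask-ngrok, which tunnels the notebook's Flask server to a public URL (the /classify route and the classify_utterance helper from the earlier sketch are illustrative):

```python
# Minimal sketch of serving the emotion model from Google Colab: flask-ngrok
# exposes the notebook's Flask app on a public ngrok URL the mobile app can hit.
# classify_utterance is the hypothetical helper from the earlier sketch.
from flask import Flask, request, jsonify
from flask_ngrok import run_with_ngrok

app = Flask(__name__)
run_with_ngrok(app)  # prints the public ngrok URL when the app starts

@app.route("/classify", methods=["POST"])
def classify():
    upload = request.files["audio"]  # WAV clip uploaded by the mobile app
    upload.save("clip.wav")
    return jsonify({"emotion": classify_utterance("clip.wav")})

app.run()
```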

What we learned

We learnt a lot about:

  • Using different frameworks and APIs for emotion detection
  • Sound processing and audio emotion recognition using state-of-the-art datasets
  • Mobile app file transfers using React Native and Expo
  • Facial emotion recognition
  • Taking our first steps into AR on the Oculus Quest

What's next for Acumen

  • Conditioning: helping kids learn to manage emotions, via a wristband linked to hardware sensors
  • Data privacy measures: delete audio files immediately after processing, and run intelligent NLP insights only on anonymised data
  • Natural language processing on speech-to-text transcripts, to detect unhealthy and bullying conversations using the GPT-3 API
  • Intelligent detection of inappropriate conversations and behaviour
  • Gamified AR and VR experiences that provide low-cost emotion training with a built-in reward mechanism

References

Autism Speaks. (2020). Autism Statistics and Facts. [online] Available at: https://www.autismspeaks.org/autism-statistics-asd [Accessed 7 Mar. 2021].

Raising Children Network. (2020). Emotional development in autistic children. [online] Available at: https://raisingchildren.net.au/autism/development/social-emotional-development/emotional-development-asd [Accessed 7 Mar. 2021].

World Health Organization: WHO. (2019). Autism spectrum disorders. [online] Who.int. Available at: https://www.who.int/news-room/fact-sheets/detail/autism-spectrum-disorders [Accessed 7 Mar. 2021].

marcogdepinto (2020). marcogdepinto/emotion-classification-from-audio-files. [online] GitHub. Available at: https://github.com/marcogdepinto/emotion-classification-from-audio-files [Accessed 7 Mar. 2021].

Rohan Sawant (2020). I built an AI Tool to detect your facial expressions while you watch a video! [online] DEV Community. Available at: https://dev.to/rohansawant/i-made-an-ai-tool-to-detect-your-facial-expressions-while-you-watch-a-video-4g4n [Accessed 7 Mar. 2021].
