Inspiration

Mental health struggles often go unnoticed until they reach a critical point. I’ve seen people, including friends and students, silently go through emotional distress without getting timely help. This inspired me to build MindWhisper — an AI-powered, proactive mental-health companion that detects emotional shifts early and provides instant support, nudges, and emergency response.

I wanted to create something that:

  • Understands a user’s mood in real time

  • Predicts their well-being trajectory

  • Provides compassionate AI support

  • Alerts loved ones in critical situations

All of this while being accessible on PC and Android.

What it does

MindWhisper is a multimodal emotional-wellbeing system that combines real-time analysis, behavioral prediction, and emergency assistance.

Key Features

• Emotion Detection (Webcam, Voice, Text): Live emotion tracking using HuggingFace models + OpenCV.

• Predictive Well-Being Analysis: Uses combined signals (text + facial emotion + sentiment) to estimate the user's upcoming mental state.

• CBT-Based Nudge Generator: Generates supportive messages, grounding techniques, affirmations, or humor depending on the user’s mood (see the sketch after this list).

• Emergency Self-Harm Detection: Triggers alarms, notifications, SMS/calls, and a guardian dashboard.

• AI Companion Mode: Plays soothing music, binaural beats, motivational audio, or stand-up comedy based on user-selected interests.

• Interest-Based Personalization: User preferences for music, hobbies, food, passions, etc. shape the AI’s conversational tone and recommendations.

• Secure Data Logging via MongoDB: Emotional logs, predictions, and user activity are stored securely for trend visualization.

• Cross-Platform Deployment: Works on both PC and Android WebView/PWA.
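
To make the nudge generator concrete, here is a minimal sketch of a mood-to-nudge mapping; the mood labels and the message pool are illustrative placeholders rather than MindWhisper's actual content library.

```python
import random

# Illustrative nudge pool keyed by detected mood; the real generator is richer
# and also draws on the user's stated interests for tone and humour.
NUDGES = {
    "sadness": [
        "Try naming three things you can see, hear, and feel right now (grounding).",
        "Thoughts are not facts. What evidence is there for and against this one?",
    ],
    "anger": [
        "Breathe in for 4 counts, hold for 4, out for 6. Repeat three times.",
    ],
    "fear": [
        "Write the worry down, then ask: what is the most likely outcome, not the worst one?",
    ],
    "joy": [
        "Nice! Jot down what went well today so future-you can revisit it.",
    ],
}

def generate_nudge(mood: str) -> str:
    """Pick a supportive, CBT-flavoured nudge for the detected mood."""
    fallback = ["Whatever you're feeling right now is allowed. Take a slow breath with me."]
    return random.choice(NUDGES.get(mood, fallback))

print(generate_nudge("sadness"))
```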

How I built it

MindWhisper is built using a modular architecture and modern AI tooling:

Tech Stack

  • Frontend: Streamlit (multi-page UI), Bootstrap prototype

  • Backend: Python

  • Database: MongoDB Atlas

  • AI Models: HuggingFace Transformers (emotion, sentiment, toxicity)

  • Computer Vision: OpenCV + deep learning models

  • Emergency Systems: Email, SMS, call integrations (Twilio / APIs)

  • Audio Generation: Text-to-Speech + personalized AI voice clones

  • Dashboard & Visualizations: Streamlit charts + predictive modeling

Architecture

Input Layer: Webcam frames, microphone audio, journal text, uploaded files

Analysis Layer (a pipeline sketch follows this list):

  • Emotion detection

  • Sentiment classification

  • Toxic/self-harm intent detection

  • NLP-based well-being prediction
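
A minimal sketch of this layer using off-the-shelf HuggingFace pipelines; the model names are illustrative choices (and `unitary/toxic-bert` is only a crude stand-in for a dedicated self-harm classifier), not necessarily the exact models MindWhisper ships with.

```python
# Sketch of the analysis layer: three text classifiers feeding one combined record.
from transformers import pipeline

emotion_clf = pipeline("text-classification",
                       model="j-hartmann/emotion-english-distilroberta-base")
sentiment_clf = pipeline("sentiment-analysis")   # default SST-2 sentiment model
toxicity_clf = pipeline("text-classification", model="unitary/toxic-bert")

def analyze_entry(text: str) -> dict:
    """Run emotion, sentiment, and risk classifiers on one journal entry."""
    emotion = emotion_clf(text)[0]      # e.g. {"label": "sadness", "score": 0.91}
    sentiment = sentiment_clf(text)[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
    toxicity = toxicity_clf(text)[0]    # top toxicity label and score
    return {
        "emotion": emotion["label"],
        "emotion_score": round(emotion["score"], 3),
        "sentiment": sentiment["label"],
        "self_harm_risk": round(toxicity["score"], 3),  # placeholder risk signal
    }

print(analyze_entry("I can't keep doing this anymore, everything feels pointless."))
```

The webcam channel runs alongside this: OpenCV captures frames, a facial-emotion model scores them, and the facial and text signals are blended before reaching the decision layer.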

Decision Layer:

Logic routes the user to one of the following (a routing sketch follows this list):

  • Guardian Dashboard (crisis)

  • AI Companion Mode (non-critical support)

  • Normal dashboard (stable state)
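
The routing idea, sketched under the assumption that the analysis layer hands over a record with `sentiment` and `self_harm_risk` fields; the thresholds and field names here are illustrative, not the production values.

```python
# Sketch of the decision layer: pick a destination from the analysis record.
CRISIS_THRESHOLD = 0.8
SUPPORT_THRESHOLD = 0.5

def route_user(analysis: dict) -> str:
    """Return which MindWhisper surface the user should land on."""
    risk = analysis.get("self_harm_risk", 0.0)
    negative = analysis.get("sentiment") == "NEGATIVE"

    if risk >= CRISIS_THRESHOLD:
        return "guardian_dashboard"   # crisis: alert guardians, show emergency resources
    if negative or risk >= SUPPORT_THRESHOLD:
        return "companion_mode"       # non-critical distress: nudges, music, AI companion
    return "normal_dashboard"         # stable state: trends and daily check-in

print(route_user({"self_harm_risk": 0.9, "sentiment": "NEGATIVE"}))  # guardian_dashboard
print(route_user({"self_harm_risk": 0.1, "sentiment": "POSITIVE"}))  # normal_dashboard
```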

Response Layer (an alert-escalation sketch follows this list):

  • CBT nudges

  • Audio output

  • Alerts (email/SMS/call)
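
For the alert path, a sketch of the escalation chain (email first, then SMS, then an automated voice call) using `smtplib` and the standard Twilio Python client; the addresses, phone numbers, credentials, and SMTP host are placeholders.

```python
# Sketch of the emergency alert chain with fallbacks; all identifiers are placeholders.
import smtplib
from email.message import EmailMessage
from twilio.rest import Client

def send_email_alert(guardian_email: str, body: str) -> bool:
    """Try the email channel; return False so the caller can fall back."""
    try:
        msg = EmailMessage()
        msg["Subject"] = "MindWhisper emergency alert"
        msg["From"] = "alerts@example.com"                    # placeholder sender
        msg["To"] = guardian_email
        msg.set_content(body)
        with smtplib.SMTP("smtp.example.com", 587) as smtp:   # placeholder SMTP host
            smtp.starttls()
            smtp.login("alerts@example.com", "app-password")
            smtp.send_message(msg)
        return True
    except Exception:
        return False

def alert_guardian(guardian_email: str, guardian_phone: str, body: str) -> str:
    """Escalate through channels until one succeeds; return the channel used."""
    if send_email_alert(guardian_email, body):
        return "email"
    client = Client("TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN")  # placeholder credentials
    try:
        client.messages.create(body=body, from_="+15550000000", to=guardian_phone)
        return "sms"
    except Exception:
        pass
    # Last resort: an automated voice call (the TwiML URL is Twilio's public demo).
    client.calls.create(url="http://demo.twilio.com/docs/voice.xml",
                        from_="+15550000000", to=guardian_phone)
    return "call"
```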

Storage Layer: Emotional logs stored in MongoDB (see the storage sketch below)

Visualization Layer: Daily/weekly/monthly emotional insights
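
For the storage and visualization layers, a sketch of a possible `pymongo` document shape and a daily-trend aggregation; the database, collection, and field names are assumptions based on the description above (an Atlas deployment would use a `mongodb+srv://` connection string instead of localhost).

```python
# Sketch: log one analysis record and pull a per-day trend for the dashboard charts.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # Atlas: mongodb+srv://... instead
logs = client["mindwhisper"]["emotion_logs"]        # illustrative db/collection names

def log_emotion(user_id: str, analysis: dict) -> None:
    """Store one emotional snapshot for later trend visualization."""
    logs.insert_one({
        "user_id": user_id,
        "timestamp": datetime.now(timezone.utc),
        **analysis,   # emotion, emotion_score, sentiment, self_harm_risk, ...
    })

def daily_trend(user_id: str) -> list:
    """Average risk score and entry count per day, ready for a Streamlit chart."""
    return list(logs.aggregate([
        {"$match": {"user_id": user_id}},
        {"$group": {
            "_id": {"$dateToString": {"format": "%Y-%m-%d", "date": "$timestamp"}},
            "avg_risk": {"$avg": "$self_harm_risk"},
            "entries": {"$sum": 1},
        }},
        {"$sort": {"_id": 1}},
    ]))
```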

Challenges I ran into

• Integrating multiple AI models (text + webcam + audio) in real time

• Ensuring low latency for emotion detection in Streamlit

• Balancing sensitivity in self-harm detection without false positives

• Securing database operations for private emotional data

• Designing an app flow that is comforting, not overwhelming

• Compatibility constraints across PC, mobile, and Android WebView

• Reliable emergency routing when SMS/calls fail

• Creating a personalized AI voice clone ethically and safely

Accomplishments that I’m proud of

Successfully built a full AI-driven multimodal mental-health system

Implemented real-time emotion detection with webcam and text

Built a CBT-based nudge generator that adapts to user mood

Achieved integration with MongoDB for secure logs

Designed an AI Companion Mode with music, humor, voice output

Created automated emergency workflows with fallback mechanisms

Developed a mobile-ready version that runs in Android WebView

Built an end-to-end prototype that feels empathetic and human-centered

What I learned

How to fuse NLP, computer vision, and predictive modeling into one coherent system

Using Transformers and sentiment models effectively in real time

Streamlit’s advanced state management for multi-page navigation

Designing mental-health tools ethically and responsibly

Building user-centric experiences that prioritize safety

Combining AI with real-world communication APIs (email, SMS, call)

Efficient MongoDB document modeling for emotional logs

Deploying cross-platform apps with minimal friction

What's next for MindWhisper

Integrating real-time ECG/heart-rate data from smartwatches

Adding 24x7 AI conversational therapy with RAG memory

Building a Guardian Mobile App for alerts and monitoring

Offline emergency mode using local TTS + device sensors

Introducing daily wellness streaks, gamification, and habit nudges

Deploying MindWhisper as a Progressive Web App (PWA)

Extending the system with voice cloning for supportive messages

Publishing the project as an open-source mental-health toolkit

