Inspiration
Most people don’t talk about how they’re doing — even to themselves.
But they do write.
The inspiration for Reflecta came from noticing how journaling is often treated as a one-off coping mechanism instead of a long-term mirror. Pages get written, closed, and forgotten. The emotion disappears into text.
We asked a simple question:
What if journaling could show you who you’re becoming over time?
Not as therapy.
Not as diagnosis.
Just as reflection.
What it does
Reflecta is a privacy-first identity reflection app.
Users write daily journal entries. From that writing, Reflecta:
- Extracts emotional signals (sentiment, stress, intensity, energy)
- Converts them into a daily Mental Health / Identity Score (MLH) on a 0–100 scale
- Visualizes those scores over time as a calm, non-judgmental timeline
- Highlights Identity Shifts — moments where something clearly changed
- Generates a short AI reflection written in the user’s own tone
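As a sketch of how extracted signals might feed the score: the field names, baseline, and weights below are illustrative assumptions, not the actual MLH formula — only the shape (rule-based, weighted, clamped to 0–100) matches what we built.

```typescript
// Illustrative sketch only: the real MLH weights are not shown here.
// The point is that scoring is a deterministic, inspectable function.
interface EmotionalSignals {
  sentiment: number; // -1 (negative) .. 1 (positive)
  stress: number;    // 0 .. 1
  intensity: number; // 0 .. 1
  energy: number;    // 0 .. 1
}

function mlhScore(s: EmotionalSignals): number {
  // Hypothetical weights around a neutral baseline of 50.
  const raw =
    50 +
    25 * s.sentiment + // mood shifts the baseline up or down
    15 * s.energy -
    20 * s.stress -
    5 * s.intensity;
  // Clamp to the 0-100 scale and round to a whole number.
  return Math.min(100, Math.max(0, Math.round(raw)));
}
```

Because the function is pure, any score on the timeline can be traced back to its inputs — no black box.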
There are no diagnoses, no labels, and no advice.
Reflecta is not a therapist.
It’s a mirror.
How we built it
The system follows a simple but intentional pipeline:
Write → Analyze → Score → Visualize → Reflect
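One minimal way the Score → Visualize steps can surface Identity Shifts is a day-over-day delta check on the scored timeline. The threshold and data shape below are assumptions for illustration; the real detection may smooth over longer windows.

```typescript
// Illustrative sketch: flag days where the daily score jumps by more than
// a threshold relative to the previous day.
interface DailyScore {
  date: string;  // ISO date string
  score: number; // 0-100 daily MLH score
}

function detectShifts(days: DailyScore[], threshold = 15): DailyScore[] {
  const shifts: DailyScore[] = [];
  for (let i = 1; i < days.length; i++) {
    // A large absolute change marks a candidate Identity Shift.
    if (Math.abs(days[i].score - days[i - 1].score) >= threshold) {
      shifts.push(days[i]);
    }
  }
  return shifts;
}
```

The flagged days are what the timeline highlights — the chart itself is plain Recharts over the same array.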
Core Stack
- Next.js (App Router) for the frontend and routing
- Tailwind CSS for a calm, minimal UI
- Supabase for authentication and Postgres storage
- Google Gemini API for emotional signal extraction and reflection generation
- Custom MLH Algorithm (rule-based + weighted factors)
- Recharts for visualizing identity shifts over time
We deliberately separated:
- Scoring logic (deterministic, transparent)
- AI reflection (expressive, human, non-clinical)
This allowed us to avoid black-box scoring while still using AI for narrative insight.
Challenges we ran into
Technical Challenges
Separating “analysis” from “advice”
Early AI outputs sounded too much like therapy. We had to aggressively constrain prompts to ensure the model acted as a reflective narrator, not a mental health professional.
Designing a meaningful score
Reducing complex emotion into a single number felt dangerous. We iterated multiple times to ensure the MLH score represented change, not judgment.
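A sketch of the kind of constrained prompt we mean — the exact wording here is illustrative, but the approach is the same: hard rules baked into every prompt, keeping the model a narrator rather than an advisor.

```typescript
// Illustrative prompt builder: the real system instructions differ, but the
// constraints (no diagnosis, no advice, match the writer's tone) are the point.
function buildReflectionPrompt(entry: string, tone: string): string {
  return [
    "You are a reflective narrator, not a therapist.",
    "Describe what the writer seems to be feeling; never diagnose.",
    "Give no advice, no labels, and no clinical terms.",
    `Match the writer's own tone: ${tone}.`,
    "",
    "Journal entry:",
    entry,
  ].join("\n");
}
```

The resulting string is what gets sent to the Gemini API; keeping prompt construction in a pure function made the constraints easy to review and test.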
Human & Team Challenges
This project wasn’t just technically hard — it was emotionally uncomfortable.
We were building something that forces introspection. That meant:
- Debating ethics late into the night
- Questioning whether we were crossing a line
- Scrapping features that felt impressive but wrong
At one point, we almost pivoted away entirely because we didn’t want to unintentionally harm users. The breakthrough came when we reframed the app:
Reflecta does not tell you who you are — it shows you who you were.
That clarity aligned the entire team.
Accomplishments that we're proud of
- Building an AI-powered product that explicitly avoids medical framing
- Designing a scoring system that is interpretable and user-respecting
- Creating reflections that feel personal without being invasive
- Maintaining a strong ethical stance under hackathon pressure
- Shipping a product that feels calm, not noisy, which is rare in AI apps

What we learned
- AI doesn’t need to advise to be helpful
- Numbers can be reflective if framed carefully
- Ethics is not a feature — it’s a design constraint
- Emotional products require slower, more deliberate decisions
- The hardest problems aren’t technical; they’re conceptual
What's next for Reflecta
Next, we want to expand Reflecta into a deeper identity system:
- Weekly and monthly identity summaries
- Optional voice & video reflections generated from past entries
- Stronger control over how data is interpreted and visualized
- Long-term pattern detection without surveillance
- Even clearer consent and transparency tools
Ultimately, Reflecta aims to become a personal archive of self, not a solution.
You change every day. Reflecta helps you notice.
Built With
- google-gemini
- mlh
- next.js
- recharts
- supabase
- tailwindcss
- typescript
