SkincareIQ

Inspiration

Skincare should be simple. It isn't.

One of us has eczema. That means years of rotating products (dermatologist-recommended, Reddit-approved, friend-vouched) and a skin barrier that paid the price whenever the combination was wrong. A retinol here, a glycolic acid there, and suddenly your face is inflamed for a week. Not because the products were bad, but because nobody told you they were destroying each other at the molecular level.

Millions of people with eczema, rosacea, acne, or sensitive skin deal with this. The skincare industry has gotten more complex, not less. Ingredient lists read like chemistry dissertations. The consequences of getting it wrong aren't just wasted money: they are flare-ups, barrier damage, and real physical pain.

Here's what frustrated us most: the information already exists. Every conflict we built into SkincareIQ is documented in peer-reviewed dermatology literature. Benzoyl peroxide oxidizes retinol (documented by Draelos in 2006). AHAs and retinoids stack exfoliation past what most skin barriers can tolerate (Leyden, 2017). None of it is secret. All of it is completely inaccessible to a normal person standing in Sephora trying to figure out if two serums will wreck their skin.

We built SkincareIQ in 12 hours to close that gap. None of us had ever been to a hackathon before.


What It Does

SkincareIQ analyzes the chemistry of your entire skincare routine, not just individual products in isolation. You add 2 to 4 products, share your skin type and any sensitivities, and we run a two-layer analysis: a deterministic chemistry engine that catches known conflicts with certainty, followed by Gemini 2.5 Flash for the nuanced interpretation and personalization that AI is genuinely good at.

The output is a color-coded conflict report (high, medium, and low severity) with plain-English explanations of the underlying mechanism, literature citations, and safer alternatives when serious interactions are found. Then we build a personalized AM/PM routine that automatically splits incompatible actives across alternate nights.


How We Built It

We are three people with three very different backgrounds: mathematical economics and CS, finance and chemistry, and AI and mechanical engineering. That mix turned out to matter a lot.

The core architectural decision came from our AI/engineering background: don't let a language model be its own ground truth for chemistry. Most teams would prompt Gemini and ask it to reason about ingredient interactions. We didn't. Our finance and chemistry background informed the other half: build the conflict database from actual literature, not from what the model thinks it knows.

Layer 1: The Chemistry Engine

A local Python module that runs entirely before any AI is called. It does three things.

First, INCI normalization. Raw ingredient strings are inconsistent in ways that are hard to anticipate until you're staring at real product data. "Granactive retinoid," "tretinoin," "hydroxypinacolone retinoate," and "retinyl palmitate" are all retinoids, but no naive string match would catch that. We built a synonym map that normalizes every known variant to a canonical key via case-insensitive substring matching after stripping INCI annotations.

Second, a hand-curated conflict database of 14+ ingredient pairs. Each conflict is tagged with a mechanism (oxidation, over-exfoliation, pH conflict, chelation), a severity level, a plain-English explanation, and a primary literature citation. When benzoyl peroxide and retinol are both present, that conflict is flagged deterministically, the same way every time, with no chance of hallucination.
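A minimal sketch of the conflict table and its deterministic lookup. The two entries are the pairs cited in this writeup; field names are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Conflict:
    pair: frozenset          # canonical ingredient keys
    mechanism: str           # oxidation, over-exfoliation, pH conflict, chelation
    severity: str            # "high" | "medium" | "low"
    explanation: str
    citation: str

CONFLICTS = [
    Conflict(frozenset({"benzoyl_peroxide", "retinoid"}), "oxidation", "high",
             "Benzoyl peroxide oxidizes retinol, degrading both actives.",
             "Draelos 2006"),
    Conflict(frozenset({"aha", "retinoid"}), "over-exfoliation", "medium",
             "Stacked exfoliation can exceed what most skin barriers tolerate.",
             "Leyden 2017"),
]

def find_conflicts(keys):
    """Flag every known conflicting pair present in a routine, every time."""
    present = set(keys)
    return [c for c in CONFLICTS if c.pair <= present]
```

Because the lookup is a set-containment check over a curated table, the same routine always produces the same flags, with no chance of hallucination.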

Third, NIH PubChem verification. For any unrecognized active-looking ingredients, we query the PubChem public REST API and retrieve verified molecular formula, IUPAC name, and CID. That data gets passed to Gemini as structured grounding, giving it real chemistry to work from.

Layer 2: Gemini 2.5 Flash, Grounded Not Free-Running

The engine's findings are injected directly into the Gemini prompt as a structured context block:

"We already know these conflicts exist. Here is the mechanism and the source. Expand on this. Do not contradict it."

Gemini then handles what it's actually good at: formulation balance analysis, skin-type personalization, synergy detection, routine recommendations, and alternative product suggestions, all anchored to verified chemistry rather than generated freely.
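The injection step can be sketched as a small prompt builder; the exact wording and field names below are illustrative, not our production prompt:

```python
def build_grounded_prompt(conflicts, profile):
    """Serialize engine findings into a context block the model may not contradict."""
    lines = ["VERIFIED CONFLICTS (treat as established fact; expand, do not contradict):"]
    for c in conflicts:
        lines.append(
            f"- {' + '.join(sorted(c['pair']))}: {c['mechanism']} "
            f"({c['severity']} severity). Source: {c['citation']}"
        )
    lines.append(f"\nUser profile: {profile}")
    lines.append(
        "Task: explain each conflict in plain English, suggest safer "
        "alternatives, and build an AM/PM routine that separates "
        "incompatible actives."
    )
    return "\n".join(lines)
```

Keeping the engine's findings in a rigid, labeled block (rather than woven into free-form prose) is part of what eventually got the model to treat them as fixed facts.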

That boundary between deterministic detection and AI interpretation is the decision we're most proud of. It's what makes SkincareIQ's outputs trustworthy in a way that a pure LLM approach can't be.

Frontend and Backend

Plain HTML, CSS, and vanilla JS on GitHub Pages. No framework overhead. Product search pulls real INCI ingredient lists from Open Beauty Facts. Products added by name fall back to Gemini for ingredient inference, with a clear disclosure to the user that chemistry-engine checks are skipped for those products. That transparency matters when the output is health-adjacent.

The backend is Flask on Google Cloud Run, stateless and containerized. The chemistry engine runs synchronously before the Gemini call, adding negligible latency while dramatically improving reliability.
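The synchronous ordering can be sketched as a plain handler body, framework details aside. `run_engine` and `call_gemini` are placeholders for the real modules:

```python
def run_engine(ingredients):
    # Stand-in for normalization + deterministic conflict lookup.
    return {"conflicts": [], "verified": []}

def call_gemini(prompt):
    # Stand-in for the Gemini 2.5 Flash API call.
    return {"analysis": "..."}

def analyze_routine(payload):
    """Stateless handler body: deterministic chemistry before any AI call."""
    engine_out = run_engine(payload["ingredients"])   # local, synchronous
    prompt = f"Known findings: {engine_out}\nProfile: {payload['profile']}"
    ai_out = call_gemini(prompt)                      # grounded interpretation
    return {"engine": engine_out, "ai": ai_out}
```

Because the engine is a local dictionary lookup, running it first costs milliseconds against a multi-second model call.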


The Design

We wanted the conflict report to feel like something you'd trust, not something that felt like a chatbot output wrapped in a card.

The severity system (high in red-tinted cards with a left border accent, medium in amber, low in green) lets you see the risk profile of your routine at a glance before reading a single word. Conflicts that were caught by the chemistry engine carry a badge that says so, distinguishing deterministic findings from AI-generated ones. The four-step guided flow (Profile, Products, Analysis, Routine) keeps the experience linear so you're never deciding what to do next.

We built the entire UI from scratch in 12 hours with no component libraries.


Challenges

Getting Gemini to respect the grounding context was harder than expected. Early versions would acknowledge the engine's findings and then reason past them anyway, generating conflicts the engine had already ruled out or missing ones it had flagged. The prompt structure went through several iterations before the model consistently treated the pre-computed context as fact to elaborate on rather than as a claim to evaluate.

INCI normalization required more manual curation than we anticipated. The same retinoid shows up under a dozen trade names and brand-specific labels across real product databases. You don't discover that until you're testing against actual Open Beauty Facts data at 2am.

Building a conflict-aware routine scheduler in vanilla JS, no libraries, while also handling graceful degradation for products without ingredient data, involved far more state management than it looks like from the outside.
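Our real scheduler is vanilla JS, but its core assignment logic (splitting flagged actives across alternate PM nights) can be sketched in Python:

```python
def split_pm_actives(products, conflict_pairs):
    """Assign products to PM night A or B so no flagged pair shares a night.

    `conflict_pairs` is a set of frozensets of canonical ingredient keys.
    """
    nights = {"A": [], "B": []}
    for prod in products:
        # Place on the first night holding no conflicting product.
        for assigned in nights.values():
            if not any(frozenset({prod, other}) in conflict_pairs
                       for other in assigned):
                assigned.append(prod)
                break
        else:
            # Fallback: with only two nights, some routines can't be fully
            # separated; the real app surfaces this as a warning instead.
            nights["A"].append(prod)
    return nights
```

The greedy two-night split handles every pairwise conflict in the curated table; routines that can't be separated in two nights are the degenerate case the UI warns about.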


What We Learned

The gap between "this information exists" and "this information is accessible" is where real products live.

We also learned that the best AI architecture decisions are often about constraints. Gemini grounded in structured chemistry context is a fundamentally different tool than Gemini asked to reason freely about the same question. The most important design choices we made were about what we would not let the model do.

For three first-time hackathon participants, two of us finishing our first year of college, building something we'd genuinely use felt like the right goal. We think we got there.


What's Next

  • Expanding the conflict database with additional journal-cited pairs
  • Barcode scanning for instant product lookup
  • Longitudinal shelf tracking: flag new products that conflict with what you already own
  • Dermatologist review for clinical validation of the conflict database
