Inspiration
Cognitive performance fluctuates daily due to fatigue, stress, attention load, and motor variability, yet most existing tools rely on self-report, intrusive testing, or labeled clinical data. We were inspired by the idea that everyday behavior already contains rich cognitive signals, especially typing patterns that people generate passively throughout the day. We wanted to build a system that could learn an individual’s baseline, detect when their behavior meaningfully drifts, and explain why, without reading content, collecting sensitive information, or requiring medical datasets. The goal was a privacy-preserving, personalized signal of cognitive stability rather than a diagnostic tool.
What it does
NeuroBaseline is a cognitive stability dashboard built from keystroke dynamics. It computes a Neuro Variability Index (NVI) on a 0-100 scale that reflects how stable a user’s behavior is relative to their personal baseline, where lower scores indicate sustained deviation rather than single-day noise. The system extracts daily behavioral features such as reaction-time variability, typing rhythm variability, and correction frequency, detects statistically significant change points, applies unsupervised machine learning to compute a multivariate anomaly score, and produces human-readable explanations identifying the dominant driver of change.
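To make the scoring idea concrete, here is a minimal sketch: the feature names, the median/MAD baseline, and the deviation-to-score mapping below are illustrative stand-ins, not the actual NeuroBaseline implementation.

```python
# Illustrative sketch only: feature names, constants, and the
# deviation-to-score mapping are hypothetical, not NeuroBaseline's code.
import numpy as np
from sklearn.ensemble import IsolationForest

FEATURES = ["reaction_time_var", "typing_rhythm_var", "correction_freq"]

def robust_baseline(X):
    """Median/MAD baseline computed over the user's calibration window."""
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0) + 1e-9  # avoid divide-by-zero
    return med, mad

def nvi(day, med, mad):
    """Map mean robust z-deviation onto 0-100 (100 = fully on-baseline)."""
    z = np.abs(day - med) / mad
    return float(np.clip(100.0 - 10.0 * z.mean(), 0.0, 100.0))

def dominant_driver(day, med, mad):
    """Feature with the largest baseline deviation, used for explanations."""
    z = np.abs(day - med) / mad
    return FEATURES[int(np.argmax(z))]

# Unsupervised anomaly model trained on the baseline window only.
rng = np.random.default_rng(0)
baseline = rng.normal([120.0, 0.30, 0.05], [10.0, 0.05, 0.01], size=(30, 3))
model = IsolationForest(random_state=0).fit(baseline)

med, mad = robust_baseline(baseline)
today = np.array([180.0, 0.31, 0.051])  # a day with slowed reactions
print(nvi(today, med, mad))                       # stability score in [0, 100]
print(dominant_driver(today, med, mad))           # "reaction_time_var"
print(model.score_samples(today.reshape(1, -1)))  # lower = more anomalous
```

Training only on the baseline window mirrors the personalization-first framing: the model learns what is normal for one user, not what is abnormal across a labeled population.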
How we built it
The backend consists of a Python pipeline for data generation, feature extraction, scoring, and explanation generation. Features are computed using rolling windows and normalized against a robust personal baseline. The Neuro Variability Index is derived from a weighted and smoothed deviation from this baseline, while sustained behavioral shifts are identified using change-point detection with PELT segmentation. For machine learning, we used an Isolation Forest trained only on the user’s baseline window, allowing the model to detect deviation without relying on labeled medical data. Daily anomaly scores are normalized to a 0-100 scale, and feature-level attribution is performed by measuring deviation from baseline to identify the dominant driver of change, such as typing rhythm versus reaction time. To connect with the frontend, we also built a small FastAPI app that exposes three endpoints:
- GET /health: basic liveness check
- GET /results: returns the contents of backend/results.json (if present)
- POST /analyze: runs backend/run_pipeline.py with the same Python interpreter; the pipeline is expected to write or update backend/results.json
The frontend is built with Next.js and React and presents an interactive dashboard that visualizes NVI trends, feature-level time series, and both rule-based and ML-generated explanations. For demo simplicity, the frontend reads from a static results file, but it is designed so that file can be swapped for the live API with minimal changes.
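The change-point detection with PELT segmentation mentioned above can be sketched in miniature. This toy version uses a squared-error (mean-shift) cost; a library implementation (e.g. ruptures) is what a real pipeline would use, and the penalty value here is an arbitrary illustration.

```python
# Toy PELT change-point detector with a squared-error (mean-shift) cost.
# Purely illustrative; the penalty value and all names are assumptions.
import numpy as np

def pelt_mean(signal, penalty):
    """Return indices where the signal's mean shifts in a sustained way."""
    n = len(signal)
    cs = np.concatenate([[0.0], np.cumsum(signal)])
    cs2 = np.concatenate([[0.0], np.cumsum(np.square(signal))])

    def seg_cost(a, b):
        # Sum of squared deviations of signal[a:b] from its own mean.
        s, s2, m = cs[b] - cs[a], cs2[b] - cs2[a], b - a
        return s2 - s * s / m

    best = [0.0] + [np.inf] * n  # best[t]: optimal cost of signal[:t]
    prev = [0] * (n + 1)         # prev[t]: last change point before t
    candidates = [0]
    for t in range(1, n + 1):
        costs = [best[s] + seg_cost(s, t) + penalty for s in candidates]
        i = int(np.argmin(costs))
        best[t], prev[t] = costs[i], candidates[i]
        # PELT pruning: discard start points that can never become optimal.
        candidates = [s for s, c in zip(candidates, costs)
                      if c - penalty <= best[t]]
        candidates.append(t)

    cps, t = [], n               # backtrack the optimal segmentation
    while prev[t] > 0:
        cps.append(prev[t])
        t = prev[t]
    return sorted(cps)

# A series whose mean jumps at index 30 yields a single change point there.
rng = np.random.default_rng(1)
sig = np.concatenate([np.zeros(30), np.full(30, 5.0)]) + rng.normal(0, 0.1, 60)
print(pelt_mean(sig, penalty=5.0))  # [30]
```

The penalty trades sensitivity against false alarms: single-day spikes cannot pay for a segment boundary, while a sustained shift in the mean can, which is exactly the "sustained deviation rather than single-day noise" behavior the NVI is built around.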
Challenges we ran into
One of the main challenges was designing a stability metric that captures sustained behavioral drift rather than reacting to single-day spikes. Another was balancing machine learning performance with interpretability, since anomaly scores alone are not useful without clear explanations. We also had to carefully coordinate backend outputs with frontend visualization so that statistical results, ML scores, and explanations stayed aligned under tight time constraints. Rapid iteration during the hackathon led to several Git merge conflicts, which required careful version control to avoid breaking the pipeline while continuing to improve the interface and analysis logic.
Accomplishments that we're proud of
We are proud of building a fully functional, end-to-end cognitive stability system that integrates statistics, machine learning, and a polished user interface. The project correctly applies unsupervised learning to a personalization-first problem, avoiding the need for labeled medical data. We also successfully generated explanations that are understandable to users rather than opaque model outputs.
What we learned
We learned that unsupervised machine learning is often the most appropriate approach for personalized, health-adjacent signals where ground-truth labels do not exist. We also learned that interpretability is just as important as detection, since users need to understand why a change occurred in order to trust the system. Throughout the project, we saw how simple behavioral signals can carry meaningful cognitive information when modeled correctly.
What's next for NeuroBaseline
Next, we plan to replace the static results file with live keystroke ingestion. We also want to add longitudinal user profiles with adaptive baselines that evolve over time.