Inspiration
CodeOrigin was inspired by a persistent problem in technical workplaces. Contributions are not always credited fairly, and women’s work in particular is often overlooked, minimized, or absorbed into team output without clear attribution. We wanted to build a tool that shifts recognition away from perception and toward evidence. Git already contains a detailed history of who did what, and CodeOrigin transforms that raw history into clear, objective insight.
What it does
CodeOrigin analyzes any repository that uses Git, regardless of hosting platform. Users paste a repository URL, and the application clones it locally to inspect commit history and code changes. Each commit is evaluated using a generative AI model across multiple dimensions, including technical complexity, scope of impact, code quality, risk and criticality, knowledge sharing, and innovation. The results are presented at both the commit and contributor level, allowing users to explore individual contributions, track patterns over time, and understand how work is distributed within a team.
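To make that pipeline concrete, here is a minimal sketch of the per-commit analysis loop, assuming GitPython for cloning and history extraction. The `CommitScore` fields mirror the dimensions listed above, and `score_commit_with_llm` is a hypothetical stand-in for the actual generative AI call, not a real API.

```python
# Minimal sketch of the per-commit evaluation flow (assumptions noted below).
import tempfile
from dataclasses import dataclass

import git  # GitPython


@dataclass
class CommitScore:
    # Dimensions as described above; scales here are illustrative.
    technical_complexity: int
    scope_of_impact: int
    code_quality: int
    risk_and_criticality: int
    knowledge_sharing: int
    innovation: int
    explanation: str  # human-readable rationale


def score_commit_with_llm(message: str, diff: str) -> CommitScore:
    """Hypothetical wrapper around the generative AI model call."""
    raise NotImplementedError


def analyze_repository(url: str) -> dict[str, list[CommitScore]]:
    """Clone a repository and score every commit, grouped by author."""
    workdir = tempfile.mkdtemp()
    repo = git.Repo.clone_from(url, workdir)

    scores: dict[str, list[CommitScore]] = {}
    for commit in repo.iter_commits():
        # Diff against the first parent; root commits are shown whole.
        parent = commit.parents[0] if commit.parents else None
        diff_text = repo.git.diff(parent, commit) if parent else repo.git.show(commit)
        score = score_commit_with_llm(commit.message, diff_text)
        scores.setdefault(commit.author.name, []).append(score)
    return scores
```

Grouping scores by author is what enables the contributor-level view: the same per-commit records roll up into per-person summaries and patterns over time.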
Impact and use cases
CodeOrigin can be used by individual contributors to document their work, by teams to better understand collaboration and workload distribution, and by managers to support more objective performance evaluations. By grounding recognition in concrete contribution data, the tool helps reduce reliance on visibility, self-promotion, or bias when assessing impact.
How we built it
We built CodeOrigin as a Streamlit web application with a modular backend. The system clones repositories locally, extracts commit metadata and diffs, and processes commits in batches. A generative AI model evaluates each commit and produces structured scores along with human-readable explanations. Session state is used to manage progress and store analyzed commits so users can interactively explore results without unnecessary reprocessing. We designed clear visualizations for commit analysis and contributor profiles, emphasizing transparency and explainability rather than opaque scoring.
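As a simplified illustration of the session-state pattern described above, the sketch below caches results per repository URL so Streamlit's script reruns don't re-trigger expensive analysis. `analyze_repository` refers to the hypothetical function from the earlier sketch, and the widget layout is illustrative rather than our exact UI.

```python
# Illustrative sketch: caching analyzed commits in Streamlit session state.
import streamlit as st

from analysis import analyze_repository  # hypothetical module from the sketch above

st.title("CodeOrigin")
url = st.text_input("Repository URL")

# Streamlit re-executes the whole script on every interaction, so results
# are stashed in st.session_state, keyed by repository URL.
if "results" not in st.session_state:
    st.session_state["results"] = {}

if st.button("Analyze") and url:
    if url not in st.session_state["results"]:
        with st.spinner("Cloning and scoring commits..."):
            st.session_state["results"][url] = analyze_repository(url)

# Explore cached results interactively without reprocessing.
if url in st.session_state["results"]:
    for author, scores in st.session_state["results"][url].items():
        st.subheader(author)
        st.write(f"{len(scores)} commits analyzed")
```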
What's next for CodeOrigin
Next, we plan to refine our evaluation criteria to improve consistency and reduce bias, while expanding support for larger and more complex repositories. We also want to add exportable reports that contributors can use in performance reviews, promotion discussions, and team evaluations. Ultimately, our goal is to make CodeOrigin a practical tool for fair, evidence-based recognition in real workplace settings.