Inspiration

Technical hiring is time-intensive and often inconsistent, especially for companies without deep engineering resources. We saw an opportunity to streamline take-home assessments by automating the entire workflow, from assignment distribution to evaluation. Our platform enables interviewers to generate private GitHub repositories with a single input. Once a pull request is submitted, our AI automatically runs tests and generates a performance summary, reducing manual review time. This approach ensures consistency, scalability, and objective assessment, helping companies make faster, data-driven hiring decisions with minimal engineering overhead.

What it does

We’ve re-imagined the take-home assessment process to be smarter, faster, and more scalable for modern technical teams. Traditionally, evaluating candidates through coding assignments requires significant manual effort, from setting up repositories and managing permissions to reviewing submissions and aligning on evaluation criteria. This becomes even more challenging for companies without dedicated technical interviewers or engineering bandwidth.

Our platform automates this entire workflow with just a few clicks. When an interviewer decides to proceed with a candidate, they simply input the candidate’s GitHub username and select a repository template through our dashboard. Instantly, our system provisions a new private repository, creates a dedicated candidate branch, and grants access only to the candidate.
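As a minimal sketch of this provisioning step, the repository and branch names can be derived deterministically from the interviewer's two inputs (the naming scheme below is illustrative, not our exact production convention):

```javascript
// Derive the private repo name and candidate branch from the two dashboard
// inputs: the candidate's GitHub username and the chosen template.
// The "assessment-<template>-<user>" scheme is an illustrative assumption.
function provisionNames(githubUsername, templateSlug) {
  const user = githubUsername.trim().toLowerCase();
  return {
    repo: `assessment-${templateSlug}-${user}`,
    branch: `candidate/${user}`,
  };
}

console.log(provisionNames("OctoCat", "react-todo"));
// → { repo: 'assessment-react-todo-octocat', branch: 'candidate/octocat' }
```

Deriving both names from the same inputs keeps every assessment traceable back to its candidate and template without extra bookkeeping.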

Once the candidate completes their assessment and submits a pull request, a webhook is automatically triggered. This initiates our built-in AI evaluation engine, which runs a series of dynamic tests tailored to the repository's criteria. The AI then scores the solution, checks for edge cases, analyses code quality, and generates a concise, insightful summary of the candidate’s performance.

The result? A complete report with a score and code analysis is sent directly back to the dashboard, giving the interviewer an objective, structured overview of the submission. This eliminates guesswork, reduces bias, and helps teams make more confident hiring decisions without requiring hours of engineering review.

Whether you're a fast-growing startup or a large organisation looking to scale technical hiring, our platform makes it effortless to run consistent, fair, and automated coding assessments, saving time while improving candidate experience and evaluation accuracy.

How we built it

We built our platform using a Node.js server to handle the frontend and API interactions. For data persistence and candidate tracking, we integrated MongoDB to power the admin panel, allowing interviewers to monitor each candidate’s progress and assessment status in real time.
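A minimal sketch of how candidate tracking can be modelled is a small state machine over the document stored in MongoDB (the status names and transition rules here are assumptions for illustration, not our exact schema):

```javascript
// Illustrative candidate lifecycle: invited -> repo_created -> submitted -> evaluated.
// Each webhook or dashboard action advances the status; invalid jumps are rejected
// so the dashboard never shows an inconsistent state.
function advanceStatus(candidate, event) {
  // Map each event type to the status it is allowed to transition from.
  const requiredStatus = {
    repo_created: "invited",
    submitted: "repo_created",
    evaluated: "submitted",
  };
  if (requiredStatus[event.type] !== candidate.status) {
    throw new Error(`invalid transition ${candidate.status} -> ${event.type}`);
  }
  // Merge any event payload (e.g. PR URL, score) into the candidate record.
  return { ...candidate, status: event.type, ...(event.data || {}) };
}
```

Treating status changes as validated transitions (rather than free-form field updates) makes it much easier to keep the dashboard and the database in sync.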

To automate repository creation and assessment delivery, we leveraged the GitHub API to generate private repositories with unique candidate branches. We also programmatically attached webhooks to each repository to listen for pull request events.
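As a sketch, the two GitHub REST API calls involved can be built as plain request descriptions (the endpoints and body fields are GitHub's real ones; the org name, callback URL, and secret are placeholders):

```javascript
// POST /orgs/{org}/repos — create a private repo for the assessment.
function createRepoRequest(org, repoName) {
  return {
    method: "POST",
    url: `https://api.github.com/orgs/${org}/repos`,
    body: { name: repoName, private: true, auto_init: true },
  };
}

// POST /repos/{owner}/{repo}/hooks — attach a webhook that fires only on
// pull request activity, delivering JSON to our evaluation endpoint.
function createWebhookRequest(org, repoName, callbackUrl, secret) {
  return {
    method: "POST",
    url: `https://api.github.com/repos/${org}/${repoName}/hooks`,
    body: {
      config: { url: callbackUrl, content_type: "json", secret },
      events: ["pull_request"],
      active: true,
    },
  };
}
```

Subscribing only to `pull_request` events keeps the evaluation pipeline from being woken up by pushes, comments, or other repository noise.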

When a candidate submits their solution via a pull request, the Webhook triggers our evaluation pipeline. This pipeline uses Google’s Gemini to dynamically generate test cases, evaluate the code, and produce a summary report along with a performance score. These results are then surfaced back to the admin dashboard for easy review and decision-making.
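Two small pieces of that pipeline can be sketched as pure functions: deciding whether an incoming webhook delivery should trigger an evaluation, and extracting a numeric score from the model's reply. Both the event filter and the expected `"Score: NN/100"` reply format are illustrative assumptions, not part of Gemini's API:

```javascript
// GitHub PR webhooks carry an "action" field ("opened", "synchronize",
// "closed", ...); we only evaluate new or updated submissions.
function shouldEvaluate(event) {
  return event.pull_request !== undefined &&
         ["opened", "synchronize"].includes(event.action);
}

// Pull a 0-100 score out of the model's free-text summary; return null
// if no score is found so the caller can retry or flag the submission.
function parseScore(modelReply) {
  const match = modelReply.match(/Score:\s*(\d{1,3})\s*\/\s*100/i);
  if (!match) return null;
  return Math.min(100, parseInt(match[1], 10));
}
```

Keeping the parsing defensive matters here: a model reply that drifts from the expected format should surface as a flagged submission, not a crashed pipeline.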

Challenges we ran into

Throughout the development of this platform, we encountered and overcame several key technical and operational challenges. One of the most significant was the automatic generation of meaningful, non-trivial test cases. We needed to ensure the AI-generated tests were relevant, accurate, and capable of evaluating real-world problem-solving skills.

Another major hurdle was integrating our database with the admin dashboard to provide real-time visibility into each candidate's progress. This required careful coordination between the backend logic and data flow to ensure assessments were reliably tracked and updated.

Working with the GitHub API also presented complexity, particularly in securely and efficiently creating private repositories at scale and assigning unique branches for each candidate. Attaching and configuring webhooks to monitor pull requests introduced additional technical intricacies that demanded robust error handling and event-driven architecture.

Setting up the Node.js server to serve as the frontend and API layer required a solid understanding of asynchronous workflows, while merging contributions across team members during active development remained one of the most challenging aspects, requiring careful version control, collaboration, and code integration practices.

Despite these challenges, our team successfully built a reliable, automated take-home assessment platform that enhances the technical hiring process from end to end.

Accomplishments that we're proud of

We’re proud to have built a fully automated, end-to-end system that connects GitHub, webhooks, AI, and our own admin dashboard into a seamless workflow. Automating the creation of private repos and candidate-specific branches using the GitHub API was a major win, especially handling authentication and scalability challenges.

We also implemented a reliable webhook system that triggers Gemini to generate dynamic test cases, run them, and return both a score and a summary, all without manual input.

Ensuring real-time sync between MongoDB and the dashboard, while managing async operations and edge cases, was a key technical milestone. Despite tight timelines, we collaborated effectively, resolved tough merge conflicts, and delivered a fully functional tool we're proud of.

What we learned

As a full-stack project, this build pushed us to grow not just technically, but also as a team. We learned the importance of effective time management, clear communication, and strategically delegating tasks based on each team member’s strengths. From designing the system architecture to handling API integrations, we divided responsibilities in a way that maximized our efficiency and allowed us to move quickly without sacrificing code quality.

On the technical side, we gained valuable experience working with GitHub programmatically: creating and managing repositories, branches, and webhooks through the GitHub API. We explored and implemented Gemini’s APIs to generate dynamic test cases and intelligent summaries, learning how to interface with AI in real-time workflows. We also built and maintained a Node.js server to connect all the pieces together while ensuring our MongoDB database remained synchronized with user activity.

Finally, we improved our Git workflow: solving merge conflicts, managing branches, and keeping development aligned across multiple contributors. This project taught us how to ship a scalable, production-like system under pressure while staying agile and collaborative.
