Bias bounties are collaboratively designed sets of challenges that bring together researchers, impacted communities, and domain experts to rigorously examine and improve AI / ML systems, models, and datasets. Our final self-hosted bias bounty closed in November 2025. In 2026, we are working to move our bias bounty program onto Zindi, a global data science challenge platform.
Red teaming is a semi-structured testing approach to assess and improve the safety and effectiveness of AI models and systems by identifying vulnerabilities, limitations, and potential areas for improvement. Humane Intelligence offers red teaming events as a paid service using our own software, which we are releasing under an open source license in 2026.
AI Contextual Evaluations are rigorous, mixed-method, bespoke evaluations designed to give a comprehensive analysis of an AI model or system’s performance for a specified problem space. In 2026, we are rapidly developing our ontological / knowledge-based AI problem space mapping and contextual AI evaluations. Humane Intelligence designs and runs contextual evals as a paid service.
Our web application was designed for the data collection aspects of our hosted red teaming events and our contextual evaluation services. Check out our demo and read about our plans to release the app as open source software.