Bias bounties are collaboratively designed challenge sets that bring together researchers, impacted communities, and domain experts to rigorously examine and improve AI/ML systems, models, and datasets. Our Accessibility Bias Bounty is now open for submissions and closes November 7, 2025!
Red teaming is a semi-structured testing approach for assessing and improving the safety and effectiveness of AI models and systems by surfacing vulnerabilities, limitations, and areas for improvement. Humane Intelligence offers red teaming events as a paid service.
AI Contextual Evaluations are rigorous, mixed-method, bespoke assessments designed to provide a comprehensive analysis of an AI model or system’s performance within a specified problem space. Humane Intelligence designs and runs contextual evals as a paid service.
Our web application was designed for the data collection aspects of our hosted red teaming events and our contextual evaluation services. Check out our demo and read about our plans to release the app as open source software.