Bias bounties are collaboratively designed sets of challenges that bring together researchers, impacted communities, and domain experts to rigorously examine and improve AI/ML systems, models, and datasets. Humane Intelligence is launching new challenges in September and October 2025.
Red teaming is a semi-structured testing approach that assesses and improves the safety and effectiveness of AI models and systems by identifying vulnerabilities, limitations, and areas for improvement. Humane Intelligence offers red teaming events as a paid service.
AI Contextual Evaluations are rigorous, mixed-method, bespoke evaluations designed to provide a comprehensive analysis of an AI model or system's performance within a specified problem space. Humane Intelligence designs and runs contextual evals as a paid service.