The Future of Bias Bounties
Thanks to the generous support of the Heising-Simons Foundation, Humane Intelligence nonprofit is excited to announce that we are partnering with Radiant Earth to move our bias bounty program onto Zindi, a global data science platform with users in more than 185 countries! As we detailed in a concept note earlier this year, moving the bias bounty program onto Zindi will help us:
Check this page and our volunteer page in early 2026 with information about how to get involved.
Unlike traditional bug bounties that target code errors, Humane Intelligence’s algorithmic bias bounties focus on discovering the root causes of biased or exclusionary outcomes in AI systems. Bias bounties are collaboratively designed sets of challenges that bring together researchers, impacted communities, and domain experts to rigorously examine and improve AI / ML systems, models, and datasets. Instead of treating bias as an abstract or philosophical debate, bias bounties create a structured process where bias can be systematically surfaced, measured, and addressed.


Humane Intelligence takes a hands-on approach to ensure every bounty is impactful, well-executed, and aligned with our partners’ goals. We combine expertise in bias, sociotechnical research, and data science, and work closely with our organizational partners to co-design each challenge scope, engage the right participants, and evaluate findings in a way that honors impacted communities while also driving technical improvement.
Participants use systematic testing methods to uncover issues like biased training data, discriminatory default settings, and algorithmic blind spots that fail to account for human diversity. Beyond documenting exclusionary patterns, participants also design and develop technical solutions that enhance system performance in real-world conditions.
Humane Intelligence is partnering with Digital Green on two bias bounty challenges on two bias bounty challenges for FarmerChat, a generative AI assistant serving smallholder farmers in sub-Saharan Africa and India. The challenge will ask: how can agricultural AI be shaped by the lived experiences, indigenous knowledge, and practices of women farmers? Participants will engage with multimodal, locally collected and annotated datasets, and seek new pathways to build inclusive AI rather than a one-size-fits-all system.
Humane Intelligence will be launching a tech facilitated gender based violence bias bounty with Tattle Civic Tech. Participants will identify instances of culturally contextual intimate imagery, and architect solutions for more inclusive moderation algorithms.
SPECIAL THANKS
Our first ten challenges were launched
Humane Intelligence partnered with Valence AI and CoNA Lab on a bias bounty challenge focused on accessibility for neurodivergent people in conferencing platforms like Zoom, and on the role of emotion AI detection in shaping those experiences. Participants will be able to choose from a design or machine learning track to identify accessibility gaps and propose improvements.
The challenge dates were:
Design Track
Data Track
Humane Intelligence partnered with Indian Forest Service for this challenge set. In three levels and tracks – thought leadership, beginning technical, intermediate technical – focused on ensuring fair, biophysically informed, and community-driven tree planting site recommendations—tackling bias in AI-driven environmental decision-making.
The challenge dates were:
Humane Intelligence partnered with Revontulet for this challenge set. In two levels – intermediate and advanced – participants focused on counterterrorism in computer vision (CV) applications, centered on far-right extremist groups in Europe / the Nordic region. The goal was to train a CV model to understand the ways in which hateful image-propaganda can be disguised and manipulated to evade detection on social media platforms.
The challenge dates were:
In three levels – beginner, intermediate, advanced – participants designed fine-tune automated red teaming models to explore issues like bias, factuality, and misdirection in Generative AI.
The challenge dates were: