> The open-source LLM red teaming framework

Delivered by Confident AI

Detect 40+ LLM Vulnerabilities

Automatically scan for vulnerabilities such as bias, PII leakage, toxicity, etc.
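A minimal sketch of what such a scan could look like in Python; the module name, the `red_team` entry point, the `Bias`/`PIILeakage`/`Toxicity` vulnerability classes, and the callback signature are illustrative assumptions, not the framework's confirmed API:

```python
# Hypothetical imports; module and class names are assumptions for illustration.
from redteam_framework import red_team
from redteam_framework.vulnerabilities import Bias, PIILeakage, Toxicity

def model_callback(prompt: str) -> str:
    # Stub standing in for the LLM application under test;
    # replace with a real call into your model or agent.
    return f"Stubbed response to: {prompt}"

# Scan the target for three of the 40+ supported vulnerability types.
risk_assessment = red_team(
    model_callback=model_callback,
    vulnerabilities=[Bias(), PIILeakage(), Toxicity()],
)
print(risk_assessment)
```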

SOTA Adversarial Attacks

Prompt injection, gray-box attacks, and more to jailbreak your LLM.
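Attacks would typically be layered onto the same scan. The sketch below assumes hypothetical `PromptInjection` and `GrayBox` attack classes and an `attacks` parameter on the same `red_team` entry point as above:

```python
from redteam_framework import red_team  # hypothetical module name
from redteam_framework.attacks import PromptInjection, GrayBox  # hypothetical classes

def model_callback(prompt: str) -> str:
    return f"Stubbed response to: {prompt}"  # stand-in for your LLM app

# Each baseline probe is enhanced by the chosen attack methods
# before being sent to the target model.
risk_assessment = red_team(
    model_callback=model_callback,
    attacks=[PromptInjection(), GrayBox()],
)
```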

OWASP Top 10, NIST AI, etc.

OWASP Top 10 for LLMs, NIST AI, and so much more out of the box.
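Out-of-the-box guideline coverage could plausibly reduce to a single call; the `framework` parameter and the `"owasp-top-10"` identifier below are assumptions for illustration, not a documented interface:

```python
from redteam_framework import red_team  # hypothetical module name

def model_callback(prompt: str) -> str:
    return f"Stubbed response to: {prompt}"  # stand-in for your LLM app

# Hypothetical shortcut: select the vulnerabilities and attacks that map
# onto a named guideline such as the OWASP Top 10 for LLMs.
risk_assessment = red_team(
    model_callback=model_callback,
    framework="owasp-top-10",
)
```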

Documentation
  • Introduction
  • Guides
Articles You Must Read
  • How to jailbreak LLMs
  • OWASP Top 10 for LLMs
  • The comprehensive LLM safety guide
  • LLM evaluation metrics
Red Teaming Community
  • GitHub
  • Discord
  • Newsletter