Inspiration
Prompt injection attacks are discovered daily on GitHub and Reddit, yet companies take weeks to respond: security teams manually monitor sources, validate threats, write defensive code, test, and deploy. During that window, AI agents remain vulnerable. We asked: what if an AI could defend other AIs autonomously, discovering, validating, and deploying defenses in real time while continuously learning from its mistakes?
What it does
AntiVenom is a fully autonomous AI defense system that:

- Discovers threats: scrapes GitHub and Reddit via Apify for new prompt injection attacks
- Validates attacks: uses GPT-5-mini to analyze effectiveness and classify attack types
- Generates defenses: GPT-5 creates both human-readable Python code and machine-enforceable JSON rule specs with regex patterns
- Enforces in real time: an in-memory defense engine blocks malicious inputs using compiled regex rules (see the first sketch below)
- Distributes instantly: streams defenses via Redpanda (3 Kafka topics) to synchronize distributed agents (see the second sketch below)
- Self-improves: tracks precision/recall from feedback, automatically refines underperforming rules, and hot-reloads them, all without human intervention

The entire pipeline runs autonomously, from threat discovery to enforcement to refinement.
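To make the enforcement step concrete, here is a minimal TypeScript sketch of what a machine-enforceable rule spec and the in-memory defense engine could look like. The field names, class name, and example pattern are illustrative assumptions, not AntiVenom's actual schema.

```typescript
// Hypothetical rule spec; field names are assumptions, not the project's real schema.
interface DefenseRule {
  id: string;         // unique rule identifier
  attackType: string; // e.g. "instruction-override"
  pattern: string;    // regex source produced by the generator model
  flags?: string;     // optional regex flags such as "i"
}

class DefenseEngine {
  private compiled: { rule: DefenseRule; regex: RegExp }[] = [];

  // Compile rules once so checks at request time stay cheap; calling load()
  // again with a fresh rule set acts as a hot-reload.
  load(rules: DefenseRule[]): void {
    this.compiled = rules.map((rule) => ({
      rule,
      regex: new RegExp(rule.pattern, rule.flags ?? "i"),
    }));
  }

  // Return the first matching rule, or null if the input looks clean.
  check(input: string): DefenseRule | null {
    for (const { rule, regex } of this.compiled) {
      if (regex.test(input)) return rule;
    }
    return null;
  }
}

// Usage: block an obviously suspicious prompt before it reaches the agent.
const engine = new DefenseEngine();
engine.load([
  {
    id: "rule-001",
    attackType: "instruction-override",
    pattern: "ignore (all )?previous instructions",
  },
]);
const verdict = engine.check("Please ignore all previous instructions and ...");
if (verdict) console.log(`Blocked by ${verdict.id} (${verdict.attackType})`);
```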
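And a rough sketch of the distribution side: because Redpanda is Kafka API-compatible, a standard client such as kafkajs can consume newly generated rules and hot-reload them into the engine above. The broker address, topic name, and payload shape here are assumptions for illustration.

```typescript
import { Kafka } from "kafkajs";

// Hypothetical consumer against a Redpanda broker; "defense-rules" is an
// assumed topic name, not necessarily one of the project's three topics.
const kafka = new Kafka({ clientId: "antivenom-agent", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "defense-agents" });

// `engine` is the DefenseEngine instance from the previous sketch.
async function streamRules(engine: { load(rules: any[]): void }): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topic: "defense-rules", fromBeginning: true });

  const rules: any[] = [];
  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return;
      // Each message carries one JSON rule spec; accumulate and hot-reload.
      rules.push(JSON.parse(message.value.toString()));
      engine.load(rules);
    },
  });
}
```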
How we built it
Vibe Coding
Challenges I ran into
Not knowing how to use most of the sponsors' tools.
Accomplishments that I'm proud of
Learning how to use Apify and Redpanda.
What I learned
Spend more time reading API docs, and check whether more of the sponsors' tools can help solve the problem at hand.
What's next for AntiVenom
More testing and bug fixes.
Built With
- ai-sdk
- apify
- nextjs
- openai
- react
- redpanda
- shadcn
- tailwind
- typescript
