
OpenBox
See, verify, and govern every agent action.
327 followers
OpenBox provides a trust platform for agentic AI, delivering runtime governance, cryptographic verification, and enterprise-grade compliance. It integrates via a single SDK with LangChain, LangGraph, Temporal, n8n, Mastra, and more, and is available to every organization with no usage limits.

Hey Product Hunt, I'm Tahir, co-founder and CTO of OpenBox AI. Today we're thrilled to introduce OpenBox, the trust platform for agentic AI that makes enterprise-grade governance available to everyone.
AI agents are now operating across workflows, systems, and organizations at scale. The question every team building with agents faces is the same:
How do you know what your agents are doing?
How do you prove they acted within policy?
How do you meet compliance requirements without rebuilding your entire stack?
OpenBox answers that. It delivers runtime governance, cryptographic verification, and enterprise-grade compliance at the point of execution, enforcing identity, authorization, policy, and risk across every agent action and cross-system interaction.
OpenBox integrates via a single SDK with no architectural changes to your existing stack. It works natively with LangChain, LangGraph, Temporal, n8n, Mastra, and more.
You get:
Production-grade SDK
Cryptographic audit trails
OPA-based policy engine
Built-in runtime guardrails
Dynamic risk scoring
Human-in-the-loop controls
Full observability from day one
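To make the governance idea above concrete, here is a minimal sketch of what wrapping an agent's tool calls in runtime policy checks with an auditable record can look like. All names here (`govern`, `policy_allows`, `audit_log`) are hypothetical illustrations of the pattern, not the actual OpenBox SDK API:

```python
import hashlib
import json
import time

# Hypothetical sketch of runtime governance around a tool call.
# None of these names are the real OpenBox API; this only illustrates
# the pattern: check policy, record the decision, then (maybe) execute.

audit_log = []  # append-only record of every attempted action

def policy_allows(action, params):
    # Toy policy: block anything that touches the production environment
    return params.get("env") != "prod"

def govern(action):
    """Decorator that enforces policy and records every attempt."""
    def wrap(fn):
        def inner(**params):  # keyword-only for simplicity of the sketch
            allowed = policy_allows(action, params)
            record = {
                "action": action,
                "params": params,
                "verdict": "allow" if allowed else "block",
                "ts": time.time(),
            }
            record["hash"] = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            audit_log.append(record)  # blocked attempts are recorded too
            if not allowed:
                raise PermissionError(f"{action} blocked by policy")
            return fn(**params)
        return inner
    return wrap

@govern("db.query")
def run_query(sql, env):
    return f"ran {sql!r} on {env}"
```

A blocked call raises before the tool runs, yet still lands in `audit_log` with a `block` verdict, which is the property the list above describes as guardrails plus audit trails.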
We built OpenBox on the belief that trust should be a right, not a privilege. Every organization deploying AI agents deserves the same governance and accountability infrastructure, whether they are a startup or a regulated enterprise.
That is why the core platform is available in production, with no usage limits and no credit card required.
Would love to hear from everyone building with AI agents today:
What are you building?
How are you handling governance?
What is missing in your stack?
Happy to answer everything here 👇
@tahir_mahmood8 Many congratulations on the launch, Tahir and team. :)
This is one of those products that makes me wonder why more people aren't building this. Most teams are racing to make agents do more; very few are thinking about “can we prove they behaved correctly?” at the point of execution.
This is the missing “governance layer” in the modern agent stack, similar to how auth/logging became non‑negotiable for web apps.
As someone who works on launches and talks to a lot of SaaS teams, I can see OpenBox becoming the default answer to “how do we ship agents into regulated or high‑risk environments without freaking out security & legal?”.
"human‑in‑the‑loop" can almost always prevent something nasty from going to prod.
Really appreciate this, @rohanrecommends. Thoughtful take.
You are absolutely right that human-in-the-loop review can prevent a lot of issues before they reach production. It is still one of the most practical safeguards teams rely on today, especially for high-risk workflows.
Our view is slightly complementary. Human review works well, but it does not always scale as agents start operating continuously across systems. What we are trying to enable is a model where actions can be governed and verified at execution time, so teams have guarantees even when a human is not involved in every step.
The aim is not to replace human oversight, but to make it optional rather than mandatory for safety.
Appreciate the support and the perspective.
@tahir_mahmood8 Love it! And that is well communicated through the launch assets. Well done! :)
Cryptographic verification of agent actions is the interesting piece here. What exactly is being signed — the prompt, the tool call, the output, all of the above? And when you say 'verify,' is that post-hoc audit trail or can you actually halt an action mid-execution if it fails a policy check?
Great question, @sounak_bhattacharya.
OpenBox signs the execution envelope around an agent action, not just a single element. That can include the prompt context, tool call, inputs, outputs, and the policy decision tied to that step.
Verification isn’t just post-hoc. Policies are evaluated before and during execution, so actions can be halted mid-flow if a check fails, while still leaving a cryptographically verifiable audit trail of what was attempted and why it was blocked.
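As a rough illustration of the "execution envelope" idea described above, here is a sketch in Python: the whole step (prompt context, tool call, output, policy decision) is canonicalised and signed as one unit, so tampering with any field invalidates the signature. This is an assumption-laden toy: a real system would likely use asymmetric signatures and key management, while HMAC with a demo key stands in here:

```python
import hashlib
import hmac
import json

# Illustrative only: signs the whole execution step, not one field.
# A production system would use asymmetric keys; HMAC stands in for brevity.
SIGNING_KEY = b"demo-session-key"  # hypothetical per-session key

def sign_envelope(envelope: dict) -> dict:
    canonical = json.dumps(envelope, sort_keys=True).encode()
    signed = dict(envelope)
    signed["signature"] = hmac.new(
        SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return signed

def verify_envelope(signed: dict) -> bool:
    copy = dict(signed)
    sig = copy.pop("signature")
    canonical = json.dumps(copy, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

envelope = {
    "prompt_context": "summarise the Q3 report",
    "tool_call": {"name": "search", "args": {"q": "Q3 revenue"}},
    "output": "found 3 documents",
    "policy_decision": "allow",
}
signed = sign_envelope(envelope)
```

Because the policy decision is inside the signed envelope, a blocked step is just as verifiable as an allowed one, which matches the "halted mid-flow but still auditable" behaviour described above.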
Triforce Todos
Very good to see you go live! How are you handling policy enforcement across different agent frameworks without adding latency?
Congrats on the launch BTW 🎉
Thanks, @abod_rehman, really appreciate it.
OpenBox enforces policies at runtime across every agent action, with a lightweight SDK that sits alongside agent frameworks rather than inside the execution path.
This allows identity, authorization, and risk checks to happen in real time without blocking the agent, while keeping integrations consistent across different frameworks.
Do you think OpenBox, or similar tools, will become a standard layer in every agent stack in the future, like auth or logging today?
Great question, @lak7.
I do think this becomes a standard layer over time.
As agents get more autonomy, teams will need visibility, policy enforcement, and verifiable execution by default, similar to how auth and logging became essential. That’s exactly the layer OpenBox is aiming to provide.
@grover___dev Agreed!
@tahir_mahmood8 @asim_ahmad_cfa @grover___dev Congrats on the launch... let's presume you were to explain this product to someone with minimal technical knowledge as it relates to a use case within a business (a business that uses AI but isn't too deep into the compliance/governance side of how this works) - how would you go about outlining the use case... asking for a friend!
The ELI5 goes like this:
“You know how when you’re at school, a teacher watches over you to make sure you’re doing the right thing — even when you’re really good at your work?
AI is kind of like a really smart kid that can do loads of tasks super fast. But sometimes it makes mistakes, or does something it shouldn’t — and nobody notices until it’s too late.
OpenBox is the teacher. It watches what the AI is doing, stops it if something looks wrong, and keeps a note of everything that happened — so grown-ups can always check.”
Nas.io
I'm wondering how the cryptographic verification works when an agent pulls from multiple data sources with different permission levels in a single workflow?
OpenBox
@nuseir_yassin1 Good question. Our permission enforcement and cryptographic proof are separate layers. Our governance pipeline checks whether the agent is allowed to touch each data source based on its trust tier. Every decision gets hashed and rolled into a per-session Merkle tree, then signed. You can verify what happened with any specific source without exposing other events. The attestation runs async — never blocks the agent.
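The per-session Merkle tree described above can be sketched in a few lines: each governance decision is hashed as a leaf, the leaves roll up to a single signed root, and an inclusion proof lets a verifier confirm one event without seeing the others. This is a generic textbook construction, not OpenBox's actual implementation:

```python
import hashlib

# Generic Merkle-tree sketch of per-session attestation: one leaf per
# governance decision, one root per session, selective inclusion proofs.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])            # duplicate last node if odd
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# One session's decisions (toy data): each becomes a leaf.
events = [b"read:crm allow", b"read:payroll block", b"write:crm allow"]
root = merkle_root(events)
```

A verifier holding only `events[1]`, its proof, and the signed root can confirm that the payroll access was blocked in this session, without ever seeing the other events, which is the selective-disclosure property the answer above claims.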
GrowMeOrganic
Huge congratulations @natsuda_uppapong @phaituly @tonyopenbox on shipping this. How does the cryptographic verification work when you need to halt an action mid-execution? Does the signature still get created for the attempted action that got blocked?
Great question, @iamanantgupta !
When an action gets blocked, the cryptographic signature is still created, by design.
Our attestation layer is intentionally decoupled from governance decisions. Signatures don't represent permission; they represent proof. Every action, whether allowed, blocked, or halted, gets recorded in a tamper-proof hash tree and signed when the session closes.
So if a policy blocks an action, the verdict itself becomes part of the cryptographic record. You get an immutable audit trail that proves what happened and what decision was made, not just the happy path. This is critical for compliance and forensic analysis.
Feel free to take a look at our docs, in particular https://docs.openbox.ai/administration/attestation-and-cryptographic-proof