cheqd

The Payment and Trust Infrastructure for Credentials. Building the Trusted Data and AI Agentic economies. $CHEQ

About us

The Payment and Trust infrastructure for credentials. Building the Trusted Data and AI Agentic economy. Your Data 🆔 Verified 👌 Portable 🎒 Private 🔑

Website
https://www.cheqd.io
Industry
Software Development
Company size
11-50 employees
Headquarters
London
Type
Privately Held
Founded
2021
Specialties
Blockchain, Self-Sovereign Identity, Web3, and Decentralized Identity

Updates

  • Traditional non-human identities like service accounts and static API keys were built for predictable, single-purpose tasks. But AI agents in 2026 are decision makers that reason, delegate, and act autonomously across systems. The old playbook no longer fits.

    Gartner predicts that by 2026, 30% of enterprises will rely on AI agents that independently trigger transactions and complete tasks on behalf of humans or other systems. These agents are dynamic, ephemeral, and often self-directed, very different from static non-human identities. Legacy identity and access management systems struggle with this new reality. They were designed for long-lived, narrowly scoped accounts, not for fluid agentic identities that operate across domains, spawn sub-agents, and require real-time policy evaluation.

    A blog by Strata argues for treating AI agents as first-class digital employees. This means applying Zero Trust principles with passwordless authentication, just-in-time provisioning, OAuth-based delegation, and human-in-the-loop approval for sensitive actions. The proposed agentic user flow includes clear steps: authentication of the delegating human or agent, trust binding, intent verification, discovery via protocols like MCP, dynamic provisioning, policy enforcement, and full observability with tamper-evident logs.

    The time for static identity is over. Enterprises that adopt a modern agentic identity playbook, with decentralised identifiers, verifiable credentials, and dynamic governance, will be able to scale AI agents safely and responsibly. Full article: https://lnkd.in/eMx3v7KA
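The staged flow described in the post can be sketched as a minimal pipeline. This is an illustrative sketch only: the names (`AgentRequest`, `run_flow`) are hypothetical and not part of any cheqd or Strata API, and a real system would back each step with cryptographic verification and a policy engine rather than simple membership checks.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRequest:
    delegator: str          # human or parent agent delegating authority
    agent_id: str           # decentralised identifier (DID) of the agent
    intent: str             # declared action the agent wants to perform
    log: list = field(default_factory=list)

def run_flow(req: AgentRequest, trusted_delegators: set, allowed_intents: set) -> bool:
    """Walk the proposed flow: authenticate the delegator, bind trust to the
    agent's DID, verify declared intent, then provision and enforce policy,
    appending an audit entry for every step."""
    steps = [
        ("authenticate", req.delegator in trusted_delegators),
        ("trust_binding", req.agent_id.startswith("did:")),
        ("intent_verification", req.intent in allowed_intents),
    ]
    for name, ok in steps:
        req.log.append((name, ok))
        if not ok:
            return False            # stop at the first failed gate
    req.log.append(("provision_and_enforce", True))
    return True
```

The audit log stands in for the "full observability with tamper-evident logs" step; in practice each entry would be signed or hash-chained.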

  • Identity governance for AI systems is shifting emphasis toward enforcing explicit data boundaries and delivering continuous auditability for autonomous operations. New research from Okta reveals a clear tension in enterprise AI adoption. https://lnkd.in/eeRw-d5G

    While 86% of IT and security leaders say AI agent workflows are very important or mission-critical to their strategy, security concerns are holding back deployment. 69% of respondents report that security issues are actively slowing down AI agent adoption. At the same time, 57% describe the effort required to secure disparate agents, apps, and workflows as high.

    The identity gap is striking. Only 27% of leaders believe their current identity systems are fully equipped to govern non-human identities like AI agents at scale. Top security worries include data leakage or exfiltration (83%) and over-privileged access (80%). Enterprises want better least-privilege enforcement, centralised governance, and strong auditability.

    Looking ahead, 98% of SaaS decision-makers will factor AI agent controls into renewal decisions to some degree, with 17% saying it will be a significant requirement. 95% believe a standardised protocol would increase their confidence in deploying agents.

    The message is clear: AI agents are strategically vital, but robust identity and access management is now the deployment gate. Verifiable, decentralised identity infrastructure will be key to closing this gap and enabling safe scaling. Full survey: https://lnkd.in/eeRw-d5G

  • Privacy-enhancing methods are playing a larger role in decentralised identity frameworks as AI agents take on sensitive or regulated workflows. Techniques such as selective disclosure allow agents to share only the minimum information needed for a specific interaction. 👇

    This stands in contrast to conventional systems that often require broader sharing or centralised storage of credentials. Agents can demonstrate compliance with policies or access requirements while organisations maintain records that align with principles of data minimisation. The approach helps organisations meet expectations around purpose limitation and rights protection in an increasingly autonomous environment.

    In 2026, programmable privacy features are transitioning from experimental concepts to core operational requirements for trustworthy agentic deployments. cheqd enables organisations to implement them effectively. @OrochiNetwork shares its thoughts on verifiable data infrastructure, compliance, and governance with AI agents. https://lnkd.in/evJ6UNBN
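As a rough illustration of selective disclosure, the sketch below uses salted hash commitments, in the spirit of SD-JWT-style disclosure but heavily simplified. A verifier sees only commitments; the holder reveals openings for exactly the claims it chooses. All function names here are hypothetical, not cheqd's implementation.

```python
import hashlib, json, secrets

def commit_claims(claims: dict) -> tuple[dict, dict]:
    """Return (commitments, openings). Commitments can be shared freely;
    openings stay with the holder until selectively disclosed."""
    commitments, openings = {}, {}
    for key, value in claims.items():
        salt = secrets.token_hex(8)
        digest = hashlib.sha256(f"{salt}|{key}|{json.dumps(value)}".encode()).hexdigest()
        commitments[key] = digest
        openings[key] = (salt, value)
    return commitments, openings

def disclose(openings: dict, keys: list) -> dict:
    """Reveal only the requested claims, nothing else."""
    return {k: openings[k] for k in keys}

def verify(commitments: dict, disclosed: dict) -> bool:
    """Check each disclosed (salt, value) pair against its commitment."""
    for key, (salt, value) in disclosed.items():
        digest = hashlib.sha256(f"{salt}|{key}|{json.dumps(value)}".encode()).hexdigest()
        if commitments.get(key) != digest:
            return False
    return True
```

The salt prevents a verifier from brute-forcing undisclosed claims with small value spaces, which is the same reason SD-JWT salts each disclosure.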

  • Enterprise reports by Strata from early 2026 reveal a consistent pattern: high enthusiasm for deploying AI agents is not yet matched by mature governance frameworks. A survey of 285 IT and security professionals found that only 18% of security leaders are highly confident their current identity and access management systems can effectively handle agent identities, with 18% reporting little to no confidence at all.

    When agents operate without dedicated identities, they frequently inherit broad or unclear permissions. Only 23% of organisations have a formal, enterprise-wide strategy for agent identity management, while nearly 80% cannot tell in real time what their agents are doing or who is responsible.

    As agent deployments expand through 2026, treating them as first-class identity principals becomes essential for both risk management and operational confidence. cheqd delivers the practical infrastructure to achieve this. $CHEQ https://lnkd.in/eh77Jh7f

  • Recent surveys from @GraviteeIO indicate that while the vast majority of enterprises have moved beyond planning stages for AI agents, only a small proportion have achieved full governance approval for production use. 👇 Many continue to rely on shared credentials or temporary tokens originally intended for human users. cheqd provides a more robust foundation by assigning each AI agent its own decentralised identifier and verifiable credentials. This establishes agents as independent principals with clearly defined permissions and traceable actions. Organisations can move away from fragile interim solutions and lay the groundwork for accountable, scalable systems. Read more here. https://lnkd.in/dpx9zTcD
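A minimal sketch of what "each agent as an independent principal" can look like, assuming a hypothetical in-memory registry. `register_agent` and `authorise` are illustrative names, not a cheqd API; in practice the identifier would be an on-ledger DID and the permissions would come from verifiable credentials.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPrincipal:
    did: str                    # the agent's own decentralised identifier
    permissions: frozenset      # explicitly granted scopes, nothing inherited

REGISTRY: dict = {}             # did -> AgentPrincipal

def register_agent(did: str, permissions: set) -> AgentPrincipal:
    """Give the agent its own identity with a clearly defined scope,
    instead of a shared credential or borrowed human token."""
    agent = AgentPrincipal(did=did, permissions=frozenset(permissions))
    REGISTRY[did] = agent
    return agent

def authorise(did: str, action: str) -> bool:
    """Every action traces back to a named agent with an explicit scope."""
    agent = REGISTRY.get(did)
    return agent is not None and action in agent.permissions
```

Because permissions are attached to the agent's identifier rather than a shared token, revoking or auditing one agent never affects another.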

  • There's a useful analogy for AI agent identity: DNS tells you who owns a domain. PKI proves you're talking to the right server. AI agents need the same:
    → a registry that says who they are
    → cryptographic proof that they are who they claim to be

    Decentralised identifiers, verifiable credentials and Trust Graphs are that infrastructure. The internet solved this for machines. Now we need to solve it for agents. $CHEQ https://lnkd.in/eD8i_rh8
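The registry-plus-proof analogy can be sketched as a challenge-response check. Here an HMAC with a registered key stands in for the asymmetric signature a real DID document would carry (e.g. an Ed25519 public key); everything below is illustrative, not cheqd's API.

```python
import hmac, hashlib, secrets

REGISTRY = {}   # did -> verification key (in a real DID document: a public key)

def register(did: str) -> bytes:
    """Publish the agent's verification key, like DNS publishing a record."""
    key = secrets.token_bytes(32)
    REGISTRY[did] = key
    return key                   # held by the agent for proving identity

def prove(key: bytes, challenge: bytes) -> bytes:
    """The agent answers a fresh challenge with its key, like a TLS handshake."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(did: str, challenge: bytes, proof: bytes) -> bool:
    """Look up who the agent claims to be, then check the cryptographic proof."""
    key = REGISTRY.get(did)
    return key is not None and hmac.compare_digest(
        hmac.new(key, challenge, hashlib.sha256).digest(), proof)
```

A fresh random challenge per interaction prevents replaying an old proof, which is the same property the PKI handshake in the analogy provides.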

  • AI is entering its infrastructure phase. Which means the question shifts from: "What can this model do?" To: "Can this system be trusted?" 🧵 Infrastructure phases are when the real standards get set. TCP/IP. HTTPS. DNS. These weren't the exciting part, but they're what everything else runs on. Verifiable credentials, DIDs, and Trust Graphs are the identity infrastructure layer for the AI era. They're not glamorous. But without them, nothing else scales safely. The companies and developers building on this infrastructure now will define how the next decade of AI deployment works. Not just technically. Legally, regulatorily, commercially. 2026 is the year to get this right. The tooling exists. The standards are converging. The regulatory deadlines are set. What's left is building. cheqd is ready. Are you? https://lnkd.in/eEXs7SDz

  • Deepfakes are now being used to bypass identity verification systems. Security teams are warning that AI-generated identities are passing basic checks. In some cases, attackers are:
    • generating synthetic faces
    • producing fake ID documents
    • creating full digital personas.

    Traditional identity systems were not designed for AI-generated humans. We need systems that verify:
    ✔ provenance
    ✔ credentials
    ✔ issuer trust.

    $CHEQ Verifiable credentials create cryptographic proof, not just visual checks. And that’s exactly what AI-age identity requires. Source: https://lnkd.in/eUDMrxXZ
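To illustrate "cryptographic proof, not just visual checks", the sketch below verifies a credential's integrity and its issuer's trust status. An HMAC stands in for the issuer's digital signature, and the credential shape is hypothetical, not a real verifiable-credential library.

```python
import hmac, hashlib, json

TRUSTED_ISSUERS = {}   # issuer DID -> signing key (a public key in reality)

def issue(issuer_did: str, issuer_key: bytes, claims: dict) -> dict:
    """The issuer binds claims to a proof; a deepfake cannot forge this."""
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"issuer": issuer_did, "claims": claims,
            "proof": hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()}

def verify_credential(cred: dict) -> bool:
    """Check issuer trust, then provenance/integrity of the claims."""
    key = TRUSTED_ISSUERS.get(cred["issuer"])
    if key is None:
        return False                       # unknown or untrusted issuer
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["proof"])
```

A synthetic face or forged document fails this check twice: the attacker controls neither a trusted issuer's key nor a valid proof over tampered claims.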

  • Gartner notes that by 2028 a significant majority of governments plan to deploy AI agents for routine decision making, yet fragmentation and legacy systems remain major hurdles. According to a Gartner survey of 138 government organisations, two major barriers remain: 41% cited siloed strategies, and 31% pointed to legacy systems as significant obstacles to AI adoption.

    By 2029, Gartner expects 70% of government agencies to require explainable AI and human oversight for automated decisions affecting citizens. The report stresses the need to “balance automation with accountability and fairness.” Read the report here. https://lnkd.in/eWBXsF_E

    Security leaders are recognising that AI agents introduce new demands on identity and access management, particularly around registration, credential automation, and policy-driven authorisation for machine actors. cheqd supports adaptive governance by linking agent actions to verifiable credentials that can be evaluated against real-time policies. The Trust Graph adds contextual awareness, allowing organisations to balance innovation with necessary protection in increasingly autonomous settings. https://lnkd.in/e498-MFV
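A toy sketch of policy-driven authorisation as described above: an agent's action passes only if a presented credential is unexpired and in scope at evaluation time. The credential shape and the `evaluate` function are assumptions for illustration, not cheqd's API; a Trust Graph would add further contextual checks on top.

```python
import time

def evaluate(credential: dict, action: str, now=None) -> bool:
    """Evaluate a credential against a real-time policy:
    the credential must not be expired and must grant the action's scope."""
    now = time.time() if now is None else now
    if credential.get("expires_at", 0) <= now:
        return False                          # credential no longer valid
    return action in credential.get("scopes", [])
```

Evaluating expiry at request time, rather than at issuance, is what makes the policy "real-time": revocation or expiry takes effect on the very next action.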

Funding

cheqd: 2 total funding rounds
Last round: Seed, US$2.6M
Source: Crunchbase