The Proof Gap
Verification systems were built for a world where humans did the verifying. Agents are arriving to find the door still locked.
[Note: This is an abridged version of the transcript from a talk focused on examining the infrastructure of institutional trust: the systems, incentives, and design choices that determine how trust is produced, verified, and allocated in digital and physical markets. Earlier essays in The Trust Graph have examined verified versus inferred trust, the production function of institutional credibility, and the governance architecture of digital public infrastructure.]
The implicit diagram every verification system draws
Every verification system draws an implicit diagram. At one end sits the fact. At the other sits the entity that needs the fact confirmed. Between them, a chain of institutions, each vouching for the one before it, extending trust downward from authoritative sources to individuals who receive it but cannot independently generate it. The diagram looks like infrastructure. It functions like a toll road, with the institutions collecting fees, setting hours, and deciding who gets through.
What the diagram conceals is a structural asymmetry that has become, in the age of digital systems, something closer to a design flaw. The institutions are trusted. The individual is not. The institution can confirm; the individual can only be confirmed. And when an institution errs, or is unavailable, or declines to participate, the individual has no instrument of recourse. There is no independent proof the individual can generate and present. There is only the slow, bureaucratic process of convincing each institutional node, one at a time, to correct what it holds.
This is not a minor inefficiency in an otherwise functional system. It is the load-bearing premise of how digital infrastructure relates to truth. Correct it, and you change the architecture of modern institutions. Leave it in place, and you have named the central obstacle to what the next phase of digital infrastructure requires. That phase, in which autonomous agents act at machine speed on behalf of human principals, is not approaching. It has begun. The proof gap, which was a cost imposed on individuals navigating a bureaucratic economy, is becoming a structural incompatibility between the verification architecture we have built and the economic architecture we are building on top of it.
What the gap actually is — and is not
The depth of the problem is best understood not through edge cases but through the ordinary. Consider what it means to prove something as elementary as the completion of a degree. The diploma is a piece of paper that cannot be independently authenticated by a machine. The transcript is the university speaking about you, not you speaking for yourself. The memory of attending is not evidence. The willingness of a former professor to testify is contingent on that professor’s availability and continued employment. None of these constitute proof in the sense that a computational system can verify. There is no cryptographic signature. There is no credential the individual holds, presents, and proves without an institutional intermediary. The information is true, often unambiguously so, and entirely unprovable in machine-readable form.
This is not peculiar to academic credentials. The same structure governs employment history, professional licensing, tax compliance, property ownership, health records, benefit eligibility, and civil registration. In each domain, the facts exist. They are held by institutions. They flow outward only when the institution chooses to confirm them, through channels the institution controls, at timelines the institution sets, in formats the institution specifies. The individual is the subject of these facts but not their custodian. This distinction between subject and custodian is the structural reality that the phrase “proof gap” names. It is not that you do not know your own history. It is that your knowledge of your own history is not evidence, in any system that requires machine-verifiable confirmation.
The phrase “proof gap” names this distance precisely. It is not the distance between truth and knowledge. The facts are generally known, by the individuals who lived them and by the institutions that recorded them. It is the distance between what is true and what can be shown, in a form that computational systems can independently verify, without requiring human intermediaries to vouch, translate, or confirm.
This precision matters because it locates the problem correctly. The proof gap is not primarily an information problem, though it produces information asymmetries. It is not primarily a privacy problem, though it produces privacy failures. It is not primarily a fraud problem, though it enables fraud by ensuring that no one can independently verify whether a claim is authentic. It is an architectural problem: verification systems were designed to route through institutional intermediaries in a way that made sense when cryptographic alternatives did not exist, and the design has not been updated even though the alternatives now exist and are mature. The infrastructure persists in a form calibrated for a different technological era. The consequences compound with each year that the calibration remains unadjusted.
India’s digital infrastructure makes the mechanics unusually legible, because the density of interconnected systems makes error propagation visible at a scale that slower-to-digitize systems obscure. When a government database incorrectly marks a living person as deceased, the consequences propagate immediately and simultaneously: the bank account freezes because of checks against identity registries, UPI transactions fail because authentication fails against the same records, health insurance portals mark the user ineligible, domestic travel becomes problematic because airline KYC systems flag the identity mismatch. The error was singular. The consequences multiply across every system that queries the same authoritative source. Correction, when it comes, requires each of those systems to be individually updated through phone calls, scanned letters, and in-person visits, because there is no mechanism for propagating truth as efficiently as the error propagated. There is no cryptographic certificate of existence. No portable, digitally signed credential anchored to an authoritative registry that can be presented once and accepted everywhere. Just the grinding, repetitive labor of convincing each isolated system, separately, that reality differs from what the database says.
What this reveals is the absence of a portable instrument of proof in the hands of the individual. The facts are in the systems. The correction path runs through the systems. The individual has no independent instrument that can establish truth to any system willing to verify a cryptographic signature. They have documents, testimony, and the patience to navigate bureaucratic correction processes one node at a time. In a world where those bureaucratic processes take days or weeks per institution, and where a single error touches dozens of systems, the proof gap is not a temporary inconvenience. It is a structural disability imposed by the architecture on everyone the architecture touches, and most acutely on those with the fewest alternative resources to compensate for it.
Why the architecture persists past its expiration date
The architecture that produced this gap was deliberate. The original design principle of digital identity systems, formulated when public-key cryptography did not exist as a practical tool, was: do not trust the user; trust the institution. If someone claims to have a degree, verify it with the university. If someone claims to have a licence, verify it with the regulating body. Build the system so that humans verify humans, and use computers to store and retrieve the outputs of those verifications.
This was sensible in the early 1970s. The alternatives did not exist. What has happened since is that the alternatives emerged, became mature, and were not deployed. Public-key infrastructure is now robust, well-understood, and in use across income-tax filing, corporate registry submissions, and e-sign frameworks. Zero-knowledge proofs, which allow one party to prove a fact to another without revealing anything beyond the fact itself, have moved from theoretical constructs to implementable protocols. The cryptographic tools to create systems where individuals can hold, control, and present verifiable proof of facts about themselves, proofs that any machine can verify without an institutional intermediary, exist and are mature. The gap is not technical. It is institutional and economic.
What has not changed is the distribution of economic incentives within the verification ecosystem. Background check companies charge per verification. University registrar offices charge for official transcripts. Licensing boards charge for licence confirmations. Banks charge for credit report pulls. Verification middleware companies aggregate API access to institutional databases and sell it to employers and financial institutions, charging for every query. The verification economy extracts rent from the inefficiency it perpetuates, and it is organized to protect that rent. Changing this requires simultaneously displacing these revenue streams, coordinating standards across thousands of institutions that have no incentive to coordinate, and updating legal frameworks that specify verification procedures in terms of the old system’s mechanics.
Those procedural requirements matter more than they appear. Regulations often specify not just what must be verified, but how: background checks for certain regulated roles must be conducted by licensed agencies; identity must be verified through specific document categories; land records must be certified via specific forms from specific offices. These procedural requirements encode the current architecture into law, creating a second layer of inertia beyond institutional economics. Even if an institution wanted to transition to verifiable credential issuance, the regulatory framework in many sectors would require simultaneous amendment of the rules specifying verification procedures. This is not insurmountable, but it raises the coordination cost of transition from high to very high, and it creates a sequencing dependency: regulatory reform must precede or accompany technical deployment, not follow it.
The cost of this inertia is not theoretical. The hiring process for a professional role in India’s regulated sectors (BFSI, healthcare, or IT services working with overseas clients) routinely consumes between seven and twenty-one days in credential verification alone. Degrees from universities that are slow to respond, employment records from firms that have since dissolved, professional licences issued by state bodies whose online systems are intermittently unavailable: each of these introduces delays that compound. Candidates accept competing offers. Employers lose productivity. The process operates at the pace of institutional response, not at the pace of computation, in an era when computation has otherwise become near-instantaneous. The verification of a bachelor’s degree takes longer than delivering a parcel across the country.
In healthcare, the consequences are more direct. A patient presenting at a new provider has no portable, verifiable medical record. Records held by previous providers are in EMR systems that do not interoperate. Medications prescribed elsewhere are invisible. Allergies documented elsewhere are inaccessible. The physician makes clinical decisions on incomplete information, not because the information does not exist, but because it cannot be retrieved, verified, and acted on in the moment of need. Preventable medical errors remain a significant cause of morbidity globally, and incomplete information is among the leading contributors.
In financial services, the circularity of the verification architecture becomes visible. When a bank verifies your identity, it checks systems that aggregate information from other institutions, which got it from previous verification processes, which traced back to some original document that was issued based on testimony that could not be cryptographically proven. The verification chain is circular: each node is trusted because the previous node trusted it, back to an origin that is not itself verifiable. This works most of the time. When it fails, correcting it requires asking the bureau to ask the bank to check its records to correct the bureau, a loop that takes months and in which the individual bears the burden of proof against an authoritative institutional record.
In property markets, transactions that could complete in minutes routinely take weeks or months. The entire delay is in proving ownership and the absence of encumbrances. The facts exist in sub-registrar records, in court registers, in bank mortgage documentation. None of it is queryable in real time by a machine that can return a verified, tamper-evident answer. Specialized lawyers and title search agencies manually research records going back decades, charging substantial fees to generate conclusions they must hedge with title insurance in case they missed something. Title insurance exists precisely because the proof system is unreliable enough that its outputs require indemnification.
In education, the transfer credit problem makes the structural cost visible in a particularly direct way. Students who move between institutions lose a significant fraction of credits they have already earned, not because the learning did not occur, but because the receiving institution cannot verify it to its own standards and therefore defaults to requiring the student to repeat work. The student knows the mathematics. The competency exists. What is missing is proof in a form the receiving institution can verify without trusting the sending institution’s transcript under conditions of incomplete interoperability. The learning is real. The proof is not.
The pattern across domains is consistent. In every case, the problem is not that the information does not exist. It is that the information cannot be shown, in machine-verifiable form, by the individual who needs to show it.
The technical foundation is not the problem
The technical foundation that would close this gap is understood and available. At its base is public-key cryptography: every issuing institution holds a key pair, signs its credentials with its private key, and publishes its public key so that anyone can verify the signature. This is the same mechanism that secures HTTPS connections, digital signatures on tax filings, and e-sign frameworks. The gap is not in the cryptography. It is in the fact that institutions do not use this mechanism to issue credentials that individuals can hold and present. The signature goes on the institutional record. It does not travel with the individual.
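The signing-and-verification mechanics described above can be sketched in a few lines. This is a toy illustration only: the RSA parameters below are the classic textbook values and are hopelessly insecure, and a real issuer would use a vetted library with Ed25519 or RSA-2048 keys rather than anything hand-rolled. The issuer names and claim strings are invented. What the sketch shows is the essential property the section describes: the signature travels with the credential, and anyone holding only the public key can verify it without contacting the issuer.

```python
import hashlib
import json

# Toy RSA parameters for illustration only; real issuers would use
# Ed25519 or RSA-2048 via a vetted library, never hand-rolled keys.
P, Q = 61, 53
N = P * Q   # modulus, part of the public key
E = 17      # public exponent
D = 2753    # private exponent (E * D ≡ 1 mod lcm(P-1, Q-1))

def digest(credential: dict) -> int:
    """Canonical hash of the credential body, reduced into the toy key space."""
    payload = json.dumps(credential, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(payload).digest(), "big") % N

def issue(credential: dict) -> dict:
    """Issuer signs with its private exponent; the signed credential
    then travels with the individual, not with the institution."""
    return {"credential": credential, "signature": pow(digest(credential), D, N)}

def verify(signed: dict) -> bool:
    """Anyone holding only the public key (N, E) can check authenticity,
    with no institutional intermediary in the loop."""
    return pow(signed["signature"], E, N) == digest(signed["credential"])

degree = {"issuer": "example-university", "subject": "holder-123",
          "claim": "BSc Mathematics, 2021"}
signed = issue(degree)
assert verify(signed)                       # authentic credential verifies offline

forged = dict(signed, signature=(signed["signature"] + 1) % N)
assert not verify(forged)                   # any altered signature is rejected
```

The design point is that verification requires nothing from the issuer at presentation time: the university signs once, at issuance, and every subsequent verification is a local computation against its published public key.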
On top of this foundation, the necessary elements are standardized credential formats, a revocation mechanism, digital wallets, and governance structures. Each has technical specifications. The technical specifications are not the hard part.
Standardized credential formats allow a degree credential from one institution to be read, verified, and acted on by any employer, any graduate programme, any licensing board, anywhere, without bilateral integration. The W3C Verifiable Credentials Data Model provides this basis. It is mature enough for production deployment in constrained domains. The issue is adoption, not technical completeness.
The revocation mechanism addresses what happens when the facts underlying a credential change. A medical licence is suspended. A degree is found to have been awarded in error. The institution needs to invalidate the credential without leaving the old one active in verification contexts. The technical solution is a revocation registry to which each credential holds a reference: any verifier checks revocation status before accepting a credential. The governance of revocation is more complex than the mechanism. An institution with unilateral revocation authority has power over individuals that can be exercised without process. The governance framework must specify when revocation is permissible, what notice is required, what appeal mechanisms exist, and what liability falls on institutions that revoke wrongfully.
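The revocation flow can be made concrete with a minimal sketch. The registry class, credential identifiers, and stated reasons below are invented for illustration; production systems use standardized status-list mechanisms rather than a plain dictionary, and, as the paragraph above notes, governance rather than code constrains when the revoke call is legitimate.

```python
# A minimal revocation-registry sketch, assuming signature checking has
# already passed elsewhere in the verification pipeline.

class RevocationRegistry:
    """Issuer-maintained registry that verifiers query before accepting a credential."""

    def __init__(self):
        self._revoked: dict[str, str] = {}   # credential_id -> stated reason

    def revoke(self, credential_id: str, reason: str) -> None:
        # Governance, not code, decides when this call is permissible:
        # notice, appeal, and liability rules constrain the issuer here.
        self._revoked[credential_id] = reason

    def is_active(self, credential_id: str) -> bool:
        return credential_id not in self._revoked

def accept(credential: dict, registry: RevocationRegistry) -> bool:
    """Verifier side: a cryptographically valid credential is accepted
    only if its revocation status is still clear."""
    return registry.is_active(credential["id"])

registry = RevocationRegistry()
licence = {"id": "lic-2024-0042", "claim": "registered medical practitioner"}

assert accept(licence, registry)                      # active licence is accepted
registry.revoke("lic-2024-0042", "suspended pending inquiry")
assert not accept(licence, registry)                  # revoked licence is refused
```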
Before reaching wallets and presentation, it is worth naming a distinction that technical discussions of verifiable credentials frequently collapse, and which the feedback on draft versions of this essay has identified as a genuine gap. There are three separate layers in any working credential system, and conflating them produces architectures that are cryptographically sound but practically inert.
The first layer is credential authenticity: the cryptographic proof that a given credential was issued by a specific institution and has not been tampered with since. This is what public-key signatures provide. It is necessary but not sufficient.
The second layer is issuer legitimacy: the question of whether the institution that issued the credential is recognized, authorized, and trusted within the relevant domain for the type of claim it is making. A cryptographically valid signature on a medical licence credential is worthless if the signing institution is not recognized by the body that governs medical licensing in the relevant jurisdiction. Verifiers cannot simply accept any signed credential from any institution claiming authority to issue it. They need infrastructure that tells them which issuers are legitimate for which claim types, under which governance frameworks, at which assurance levels.
This is the trust registry layer, and it is the most underspecified component of verifiable credential architecture in most public discussions. A trust registry is a published, maintained list of issuer identities, their public keys, the credential types they are authorized to issue, the assurance levels their issuance processes meet, and the governance framework that has recognized them. When a verifier receives a credential, they check not only the cryptographic signature but the registry: is this issuer listed? Are they currently in good standing? Are they authorized to make this specific type of claim? Without this layer, the credential ecosystem fragments into isolated trust domains that cannot interoperate, or it collapses into over-permissive acceptance of any signed credential from any self-declared issuer. The trust registry does not replace cryptographic verification; it contextualizes it. The signature answers the question of whether the credential is authentic. The registry answers the question of whether the issuer’s claim of authority is legitimate. Both questions must be answerable for the credential to carry weight.
In large credential ecosystems, trust is not binary and is not established directly between every pair of issuers and verifiers. It is mediated through trust frameworks that specify the rules, accreditation processes that evaluate whether issuers meet those rules, registry infrastructure that publishes the results, and assurance levels that communicate to verifiers what evidentiary standard the issuance process met. A degree credential issued by a university that has been audited under a national higher education accreditation framework carries a different assurance level than a degree credential issued by an institution that has simply registered its public key. The cryptographic signatures may be identical. The trust infrastructure that contextualizes them is not.
India’s existing DPI context offers relevant precedents here too. The account aggregator framework specifies which entities can serve as aggregators not merely by requiring them to register, but by specifying technical standards, audit requirements, and ongoing compliance obligations. The result is a trust framework in which verifiers know what an account aggregator designation means, not just that a given entity claims to be one. Verifiable credential infrastructure requires the same kind of trust framework specification, sector by sector, for each domain in which credentials are expected to carry weight. This last point matters: there will not be a single universal trust registry covering all credential types. Education credentials, health credentials, professional licences, and financial credentials will each evolve under separate governance frameworks, maintained by the institutions with legitimate authority in each domain. The goal is not uniformity but interoperability, so that a verifier operating across domains can query the relevant registry for each credential type it encounters without needing a direct bilateral relationship with every issuer.
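The registry check the preceding paragraphs describe can be sketched directly. Every entry, issuer name, and assurance label below is invented for illustration; in practice these registries are governed sector by sector, as the account aggregator precedent suggests, rather than published as a single universal list.

```python
# A trust-registry lookup sketch: the check a verifier runs after
# (never instead of) cryptographic signature verification.

TRUST_REGISTRY = {
    "example-university": {
        "good_standing": True,
        "authorized_types": {"DegreeCredential"},
        "assurance": "audited",        # e.g. accredited under a national framework
    },
    "self-declared-school": {
        "good_standing": True,
        "authorized_types": {"DegreeCredential"},
        "assurance": "self-asserted",  # merely registered a public key
    },
}

RANKING = {"self-asserted": 0, "audited": 1}   # illustrative assurance ordering

def issuer_is_legitimate(issuer: str, credential_type: str,
                         min_assurance: str = "audited") -> bool:
    """Is the issuer listed, in good standing, authorized for this claim
    type, and at a sufficient assurance level for this verifier's policy?"""
    entry = TRUST_REGISTRY.get(issuer)
    return (entry is not None
            and entry["good_standing"]
            and credential_type in entry["authorized_types"]
            and RANKING[entry["assurance"]] >= RANKING[min_assurance])

# Two cryptographically identical signatures, two different trust weights:
assert issuer_is_legitimate("example-university", "DegreeCredential")
assert not issuer_is_legitimate("self-declared-school", "DegreeCredential")
assert not issuer_is_legitimate("example-university", "MedicalLicence")
```

Note how the last assertion captures the claim-type scoping the section emphasizes: a legitimate degree issuer is not thereby a legitimate medical-licence issuer, however valid its signatures.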
The third layer is the authorization chain: the delegation structure that expresses who has authorized whom to do what, on whose behalf, under what constraints. This is the layer most specific to the agentic economy, and it is developed in the section that follows.
Digital wallets held by individuals are where credentials reside between issuance and presentation. The wallet is the critical locus of individual control. It stores credentials, manages the cryptographic material required to present them, enforces consent requirements, and produces presentation packages that reveal only the claims the individual has consented to share. An individual who holds a credential with many attributes need not share all of them to prove any one. The wallet produces a selective disclosure presentation that reveals only what the specific transaction requires. Zero-knowledge proofs extend this: an age credential can produce a proof that the holder is above a minimum threshold without revealing the exact birth date. A degree credential can prove completion of the relevant programme without revealing the GPA or the specific institution, if those details are not germane.
This privacy architecture is significantly superior to current practice. Under the existing system, proving eligibility requires revealing far more than eligibility requires. Proving you are above a minimum age means presenting a document that shows your exact birth date, home address, licence number, and other identifying details. The disclosure is always in excess of what the verification requires, because the credential was designed for human review. The human reviewer can be asked to attend only to the relevant attribute. The credential itself carries everything, creating a data minimization failure at scale. Every time you prove your age to access a service, you create a record of your exact birth date, home address, and identifying document number at that service provider, information the provider does not need to perform age verification and that is a liability for both parties if the provider’s systems are compromised.
The consent architecture overlays selective disclosure to address facts that should not be verifiable at all without the individual’s active permission, regardless of how minimal the disclosure would be. Medical information is the clearest case. A hospital can verify identity, insurance status, and emergency contact information with the individual’s consent. It should not be able to verify psychiatric history, reproductive health decisions, or HIV status without specific, contemporaneous consent for those specific disclosures. The credential architecture must encode these consent requirements as constraints on presentation, not as conventions that implementations can choose to honor or ignore. Governance must specify which categories require consent as a legal matter, what the emergency override mechanisms are when consent cannot be obtained, and what liability falls on verifiers who attempt to access consent-required credentials without authorization.
The audit log question is related but distinct. In the current system, verification events are largely invisible to the individual. A background check company calls your university, your previous employers, and your professional licensing boards, and you may not know what was checked, what was found, and what was reported. In a verifiable credential system, the individual is the intermediary: the verifier requests proof from the individual’s wallet, the wallet presents the credential with the individual’s consent, and the individual knows that the verification occurred. This visibility is itself a significant improvement in individual agency. Whether individuals should have a right to know who has verified their credentials beyond the immediate transaction, whether verifiers should be required to notify individuals of verification events, and whether there should be a right to contest verification requests that exceed what is necessary for the stated purpose, are questions currently unresolved in most regulatory frameworks. Their resolution will significantly affect whether verifiable credentials serve individual interests or are instrumentalized against them.
The accountability implications compound when verifiable credentials contain errors. A credential that is cryptographically signed is in some sense immutable once issued. The issuer can revoke it and issue a corrected credential, but the revocation history is visible. An institution with a pattern of issuing and revoking credentials, or of revoking without stated cause, should be legible as a less reliable issuer. The governance infrastructure needs to maintain and surface this information, because the reputation of issuers is itself an input to the trust value of their credentials. A credential issued by an institution with a high revocation rate is worth less than the same credential from an institution with a consistent record of accuracy. And the principle that the burden of proof should run with the data, not with the individual disputing it, inverts the current asymmetry in a way that places institutional accountability in a structurally appropriate location.
The governance structure is the final and most consequential element. Someone must maintain the standards, resolve disputes about credential validity, update the revocation infrastructure, punish issuers who abuse their credential-issuing authority, and adapt the architecture as circumstances change. The governance structure determines whose interests the infrastructure serves and how accountability flows when something goes wrong. Technical infrastructure can be described in specifications. Institutional infrastructure requires the harder work of specifying legitimate authority, representation, enforcement, and revision mechanisms.
India’s existing DPI governance experience is the most relevant model. UIDAI has governed Aadhaar authentication infrastructure under institutional and legal pressure, evolving its rules and enforcement mechanisms in response to challenge. NPCI has governed the UPI stack at billions of transactions while maintaining open access and preventing monopolization of the transaction layer. The account aggregator framework has established a consent-based data flow architecture with regulatory backstopping from the RBI, specifying which entities can serve as aggregators, what standards they must meet, and what legal standing their outputs have. Each of these involved multi-stakeholder coordination, regulatory mandate, and the willingness to specify governance formally rather than leaving it to informal norms. Each also involved a willingness to treat digital infrastructure as a public good requiring public governance, not a market product that private actors will govern adequately through competition alone. Verifiable credential infrastructure requires the same orientation. The question of who sits on the governance bodies, what authority they have, and what accountability they face is as consequential as the question of which cryptographic primitives the system uses. Both questions have answers. Neither has been adequately addressed.
The agentic economy makes the proof gap structural, not incidental
The verification problem was already consequential when the entities doing the proving were humans. It becomes structurally incompatible with economic activity when the entities doing the proving are AI agents. This distinction is not gradual. It is categorical, and the category shift is already underway.
The verification infrastructure we have built is calibrated for approximately eight billion human actors, each generating credential events at comprehensible human rates: a degree every few years, a licence renewal at defined intervals, an employment change every so often. The institutional verification architecture, however inefficient, mostly processes these within timeframes that feel slow but do not prevent economic activity. A background check that takes two weeks delays a hire. It does not prevent the economy from functioning.
AI agents operate at different rates entirely. An autonomous agent managing procurement for a mid-sized enterprise might execute hundreds of vendor interactions daily. An agent handling customer service queries might present authorizations thousands of times per hour. An agent operating within a financial workflow might need to prove delegated transaction authority millions of times across a working day. The verification events are not occasional. They are continuous, concurrent, and require millisecond resolution. The institutional verification architecture, which routes each event through a human-accessible intermediary at timescales measured in seconds to weeks, cannot process this. Not slowly. Not inefficiently. Not at all.
The failure mode is not that agent-driven verification becomes expensive. It is that agent autonomy collapses back into human supervision. An agent that must wait for institutional confirmation at each step of its workflow operates at human speed, which eliminates most of the value of deploying the agent. The choice, in the absence of cryptographic proof infrastructure, is between agents that are autonomous and unverifiable, or agents that are verified and human-supervised. Neither is the agentic economy. The agentic economy requires agents that are simultaneously autonomous and cryptographically verifiable.
The delegation problem makes this harder still, in ways that have no clean human analogue. When a human acts on behalf of another human, the authority relationship is expressed through legal instruments developed over centuries: power of attorney, employment contracts, board resolutions, regulatory authorizations. These are imperfect, but they are comprehensible, auditable, and legally interpreted. When an AI agent acts on behalf of a human, or on behalf of an institution, or on behalf of another AI agent, the authority relationship needs a machine-readable, machine-verifiable expression. The agent needs a delegation credential: a digitally signed statement that this principal authorizes this agent, to this scope of action, with these constraints, revocable under these conditions.
Without delegation credentials, the alternatives are all worse. Either humans must continuously supervise and approve agent actions, which eliminates autonomy and returns the system to human-speed verification. Or agents operate without verifiable authorization, which creates accountability gaps that become significant when agents enter contracts, spend money, access sensitive data, or take actions with real-world consequences. Or informal trust systems develop within closed ecosystems, with agents trusting each other based on platform membership rather than cryptographic proof, which functions within the ecosystem but fails when cross-ecosystem interaction is required, which is precisely when verification matters most.
The delegation chain compounds as architectures grow in complexity, and the complexity of agent architectures is already growing faster than the governance frameworks designed to contain it. An enterprise deploying AI agents at scale has: employee credentials issued by the enterprise establishing the employee’s authority, tool credentials issued by tool providers establishing the tool’s verified capability, delegation credentials issued by employees to specific agent tools scoping what those tools may do on their behalf, and sub-delegation credentials issued by those tools to sub-agents within automated workflows. The complete authorization chain for a single agent action may traverse four or five credential issuers. The chain must be traceable and auditable, because accountability in an agentic economy depends on it entirely. If an agent takes an action that causes harm, determining who authorized that action, to what scope, under what constraints, and whether the action fell within those constraints, requires following the delegation chain to its principal. If that chain is expressed in verifiable credentials, the audit is automatic and tamper-evident. If it is expressed in informal configuration files and runtime permissions, the audit is reconstruction, contested at the moment it matters most.
The market structure of trust in agent networks creates an additional pressure that is distinct from the human credential case. In a market where agents interact with each other autonomously, the ability to cryptographically verify another agent’s authorization becomes a competitive prerequisite, not a compliance requirement. An agent that cannot prove its authorization to a counterparty cannot complete transactions with that counterparty. An agent that can prove authorization instantly, cryptographically, to any counterparty that accepts the standard, has a structural advantage in any market where trust is a prerequisite for interaction. This creates adoption incentives that do not exist in the human credential market, where trust can often be approximated through reputation and repeated interaction. In agent markets, where interactions may be one-time, cross-ecosystem, and initiated at machine speed with no time for reputation assessment, cryptographic proof is the only practical instrument of trust. The market pressure for verifiable credential adoption in agent networks is therefore stronger than the market pressure in human credential markets, and it arrives faster because the agent deployment timeline is compressing rapidly.
What this implies for infrastructure design is that delegation credentials are not an optional extension to verifiable credential architecture. They are a first-class requirement for the agentic economy. A credential system designed for human credential presentation that does not accommodate delegation chains is not fit for purpose. The W3C Verifiable Credentials specification provides some of the necessary primitives, but the specific semantics of delegation, scope expression, constraint encoding, and chain verification require specification work that has not been completed to the standard necessary for interoperable deployment at agent scale.
The governance of AI agent credentials requires treatment that differs categorically from the governance of human credentials. Human credentials are issued to individuals with legal standing who can consent, dispute revocation, and exercise rights under data protection law. AI agent credentials are issued to software processes with no legal standing, that cannot consent in any meaningful sense, and whose behavior is determined by training and runtime configuration rather than autonomous choice. The human credential governance framework, built around individual rights and consent, does not extend to this case.
What AI agent governance requires instead is accountability through transparency: the credential encodes not just authorization but the scope, constraints, and principal chain of that authorization, and any verifier can inspect all of this before accepting the agent’s action as legitimate. If an agent presents a delegation credential authorizing it to make purchases up to a specified limit on behalf of a named principal, the verifier can confirm the principal’s identity, the authenticity of the delegation, the scope constraints, and the current validity of the credential, all in a single cryptographic verification taking milliseconds. The architecture transforms accountability from an after-the-fact audit into a before-the-fact gate. This is not a minor improvement in the efficiency of oversight. It is a redesign of how accountability is expressed, from reconstruction after events to verification before them.
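The checks a verifier runs at that gate can be sketched as follows. The field names, the DID strings, and the check list are hypothetical assumptions, not a published standard; a real verifier would additionally verify the issuer's cryptographic signature and consult a revocation registry.

```python
# A minimal sketch of verification as a before-the-fact gate: the action is
# admitted only if the presented delegation credential covers it.
import time

def gate(credential: dict, action: dict) -> bool:
    """Accept an agent action only if the delegation covers it."""
    checks = [
        credential["principal"] == action["on_behalf_of"],    # right principal
        credential["expires_at"] > time.time(),               # still valid
        action["type"] in credential["permitted_actions"],    # in scope
        action.get("amount", 0) <= credential["spend_limit"], # within limit
    ]
    return all(checks)

delegation = {
    "principal": "did:example:alice",
    "permitted_actions": {"purchase"},
    "spend_limit": 500,
    "expires_at": time.time() + 3600,
}
print(gate(delegation, {"on_behalf_of": "did:example:alice",
                        "type": "purchase", "amount": 120}))   # within scope
print(gate(delegation, {"on_behalf_of": "did:example:alice",
                        "type": "purchase", "amount": 9000}))  # over the limit
```

The design point is that the gate runs before the transaction rather than in a post-incident audit: an out-of-scope action is never admitted, so accountability does not depend on reconstructing intent afterward.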
The forgery dimension of the agentic economy is the third pressure, and it changes the character of the proof gap in a way that is irreversible. Generative AI has moved the production of convincing synthetic documents, images, audio, and video from technically specialized work to routine capability. The threshold at which a forged document is indistinguishable from an authentic one, for a human reviewer, has already been crossed in many categories. The trajectory points toward a world in which any non-cryptographically-signed artifact must be treated as potentially synthetic. A document with no cryptographic signature proves nothing that cannot equally be asserted by a well-prompted model.
This is not a future risk to be managed through policy. It is a present condition that is accelerating. Verifiable credentials are not one response among several to the generative AI challenge. They are the only response that works at machine speed and at the scale the agentic economy requires. The logic is direct: if a credential is cryptographically signed by a key under institutional control whose public key is publicly registered, the credential’s authenticity can be verified by any machine performing a cryptographic check. A generative model cannot produce this signature without access to the private key, and the private key is a security property of the issuing institution, not a computational challenge.
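The asymmetry can be made concrete with a toy example. The sketch below uses textbook RSA with tiny primes, which is insecure and purely illustrative (a real issuer would use Ed25519 or RSA-2048 via a vetted library), but it shows the essential point: verification needs only the published key, while producing a valid signature needs the private one.

```python
# Toy illustration of why a signature cannot be forged without the private key.
# Textbook RSA with tiny primes, insecure, for illustration only.
import hashlib

# Issuer's keypair: (n, e) is published; d never leaves the institution.
p, q = 61, 53
n = p * q            # 3233
e = 17               # public exponent
d = 2753             # private exponent: e * d = 1 (mod phi(n))

def digest(doc: bytes) -> int:
    # Hash the document and reduce into the RSA modulus.
    return int.from_bytes(hashlib.sha256(doc).digest(), "big") % n

def sign(doc: bytes) -> int:
    # Requires the private key d, held only by the issuer.
    return pow(digest(doc), d, n)

def verify(doc: bytes, sig: int) -> bool:
    # Requires only the public key (n, e): any machine can run this check.
    return pow(sig, e, n) == digest(doc)

credential = b'{"degree": "B.Tech", "holder": "did:example:alice"}'
sig = sign(credential)

print(verify(credential, sig))                 # authentic credential verifies
print(verify(b'{"degree": "PhD"}', sig))       # altered claim fails verification
```

A generative model can produce a perfectly plausible credential body, but without d it cannot produce a sig that survives the public-key check, which is exactly the property that survives the collapse of human-reviewable authenticity.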
The evidentiary standard for digital information changes as a consequence. Cryptographically signed information is provably authentic or provably forged. Unsigned information is indeterminate. In a world of routine forgery, indeterminate provenance is not an acceptable epistemic condition for consequential decisions. The shift from human review to cryptographic verification is not a preference for technical elegance over human judgment. It is a recognition that human judgment cannot scale to the volume or speed of machine-generated content, and that indeterminate provenance in an agentic economy is not merely inconvenient. It is a governance failure waiting to become a liability crisis.
The specific pathology that emerges when agents operate in environments without cryptographic proof infrastructure is worth naming precisely, because it is already visible in nascent form. Agents that cannot prove their authorization cryptographically develop informal trust through repeated interaction and reputation within closed ecosystems. Those ecosystems become walled gardens: agents inside the garden trust each other, agents outside cannot interact without human intermediation. The efficiency gains of agent autonomy are realized only within the walls. Cross-ecosystem interaction, which is where most of the economic value of agent networks lies, requires human oversight that reintroduces the latency and cost that autonomy was supposed to eliminate. The walled garden is the failure mode of an agentic economy without proof infrastructure. It is also, notably, a comfortable outcome for the large platforms that operate the walls.
The economic logic of transition — and who resists it
The economic model for verifiable credentials inverts the current pricing structure in ways that clarify why the transition faces resistance. In the current verification architecture, every verification is a billable event. The institution that maintains the record charges for access. The middleware company that aggregates access charges for queries. The background check company that compiles results charges for reports. The entire economic structure of the verification market rests on per-verification pricing.
In a cryptographic credential system, verification is not a billable event. Anyone can verify a cryptographic signature using the issuer’s public key, which is freely available. The marginal cost of verification approaches zero. This eliminates the revenue basis for verification middleware companies entirely. It substantially reduces the revenue from transcript and licence confirmation services. The entities that benefit most directly are verifiers: employers, lenders, government agencies, healthcare providers. They currently pay for verification and bear the cost of verification delays. Under a cryptographic credential system, their verification costs drop to near zero and their verification latency drops to milliseconds.
This cost-benefit asymmetry is the economic root of transition friction. It is not merely that institutions resist change. It is that the institutions that must act first, building credential issuance infrastructure, bear the implementation costs, while the benefits accrue primarily to verifiers who bear no issuance obligation. This is the classic structure of a market failure: the costs of action are concentrated and immediate, the benefits of action are distributed and deferred, and the incumbents who profit from the status quo are organized while the beneficiaries of change are not. Left to voluntary adoption dynamics, this structure produces slow transition. It does not produce no transition, because the competitive dynamics of early adoption eventually create pressure on laggards, but it produces a transition measured in decades rather than years.
Regulatory mandate is the fastest mechanism and it is tractable in heavily regulated sectors. Professional licensing bodies have a regulatory mission directly served by making licence verification faster and more reliable. Mandating that they issue verifiable credentials is a policy instrument, not a technical innovation. Healthcare providers under national health schemes, educational institutions receiving government funding, and banks under RBI oversight are all within regulatory reach. The political economy friction in these sectors comes not from the regulated entities themselves but from the verification middleware companies that have built businesses on API access to their registries. Displacing those incumbents requires regulatory clarity that verifiable credential presentation is a legally sufficient substitute for traditional verification methods. The precedent for this kind of regulatory clarity is not absent: the RBI’s account aggregator framework specified precisely which entities could serve as aggregators, what standards they had to meet, and what legal standing their outputs had. The same kind of specification is required for verifiable credential infrastructure.
Government funding of issuance infrastructure applies the public goods argument with particular force in the Indian context. India has already demonstrated willingness to fund shared digital infrastructure when the public interest case is clear: UPI, Aadhaar, FASTag, CoWIN, DigiLocker, and the account aggregator framework are all public investments that created shared infrastructure on which private competition could operate. The incremental investment required to add verifiable credential issuance to existing government registries (UIDAI, PAN, GSTN, ABHA, the Academic Bank of Credits) is modest relative to the value of the resulting capability. The framing is not replacement of existing infrastructure but extension of it, which matters for political viability: institutions that have already made the commitment to DigiLocker integration or ABC are not being asked to abandon that investment but to deepen it.
The network effect structure means that domain-level tipping is more likely than gradual adoption across all domains simultaneously. Once enough employers in a sector accept verifiable credentials, enough universities face competitive pressure to issue them, and the domain tips. Education credentials are likely to tip first. Employers in regulated sectors pay substantial sums and experience substantial delays for verification that verifiable credentials would make instant. India’s Academic Bank of Credits framework provides an existing scaffold for this transition: ABC is designed to allow students to accumulate and transfer credit across institutions, which is the same coordination problem verifiable credentials solve. The friction in early adoption for education comes from the incentive misalignment: the benefit accrues most visibly to graduates, in the form of faster verification by employers, while the implementation cost is borne by universities. In markets where prestige is the dominant credential signal, the reputational benefit of verifiable credentials may be marginal for elite institutions. In markets where graduates compete on demonstrable competency rather than institutional brand, the benefit is significant and the adoption pressure follows.
Professional licensing follows a more aligned incentive structure, because the body’s mission is directly served. Healthcare is technically the most impactful domain and institutionally the most complex, and the two properties are related: the impact is high because incomplete information at the point of care is a direct contributor to preventable harm, and the complexity is high because no single actor has authority to mandate adoption across the full institutional landscape. The ABHA framework provides an anchor, but anchor identifiers alone do not solve the interoperability problem. What is required is a combination of standardized credential formats for health data, regulatory mandate for credential issuance by covered entities, and the account aggregator consent framework extended to health credential flows. Property is the domain where economic stakes are highest and institutional resistance is most concentrated. The efficiency gains from cryptographic property registration are large enough that regulatory mandate will eventually arrive; what is missing is not technology but the political appetite to mandate the transition over objection from legal professionals and title search firms whose business models depend on the inefficiency.
The transition period itself requires specific design attention. During it, both systems must operate in parallel. An employer that accepts verifiable credentials must also accept traditional transcripts, because not all universities will have implemented verifiable issuance simultaneously. A healthcare provider must use verifiable records when available and fall back to existing mechanisms when they are not. A government portal must accommodate both cryptographic credential presentation and legacy document submission while the issuing institutions build out their infrastructure. The bridge systems required for this parallel operation must be designed to create adoption incentives rather than to entrench the old system by making continued use of it costless. If early adopters experience immediate, concrete benefit — faster hiring, lower verification costs, reduced administrative burden — the network effect works in favor of adoption. If they experience no benefit until critical mass is achieved, the transition will be protracted. Designing the transition to make early adoption clearly beneficial is a governance task, not a technical one, and it requires deliberate specification of which verification contexts will accept verifiable credentials from day one.
Who controls the infrastructure determines what it does
The governance question will determine whether verifiable credential infrastructure functions as a tool of individual empowerment or as a more sophisticated architecture for institutional control. The technical layer is neutral with respect to this question. What determines the answer is who controls the issuing keys, who controls the revocation lists, who has access to verification logs, and who can modify the standards.
Three archetypal models exist. In the government-controlled model, the state issues and manages credentials and operates core infrastructure. The advantage is universality. The risk is surveillance: a government operating both the credential issuance and verification infrastructure can observe, at granular detail, who is verifying what about whom and when. The fact that digital identity systems are increasingly linked to state authority over economic participation, as welfare eligibility, tax compliance, and financial access become tied to identity verification, makes this risk structural rather than theoretical.
In the corporate-controlled model, technology companies provide wallets, credential management, and verification services. The advantage is competitive pressure on user experience. The risk is that business models diverge from user interests: companies monetize credential data, create lock-in through proprietary formats, and serve shareholders rather than the individuals whose credentials they hold. The history of platform markets suggests that corporate control of credential infrastructure without regulatory backstopping produces predictable pathologies.
In the decentralized model, individuals run their own infrastructure using open protocols. The advantage is genuine individual control. The risk is that most people cannot and will not run personal cryptographic infrastructure, creating a system that is theoretically sovereign but practically accessible only to technically sophisticated users.
The realistic architecture is a hybrid, and specifying the hybrid is the governance work. Governments maintain high-trust root credentials: citizenship, civil registration, national identity. These are the claims for which state authority is genuinely appropriate, because they derive their validity from the state’s legal monopoly over civil registration. Domain institutions maintain credentials within their authority: universities issue degree credentials, licensing boards issue professional credentials, employers issue employment credentials, hospitals issue health credentials. Technology companies compete to provide wallet software and user interfaces, under portability requirements that prevent lock-in. Open protocols and mandatory interoperability requirements ensure that no single actor controls access to the credential namespace. Multi-stakeholder governance bodies maintain standards, resolve disputes about trust framework membership, and adapt the architecture as circumstances change. Legal frameworks, probably including a right to receive verifiable credentials for facts an institution has authority over, ensure that the infrastructure does not remain voluntary for institutions that prefer opacity.
The portability requirement deserves specific emphasis because it is the element most likely to be compromised in practice. If you hold credentials in one wallet, you should be able to export them and import them into another wallet without asking permission from the wallet provider, the credential issuer, or any other party. This prevents wallet lock-in and ensures that your credential portfolio is genuinely yours, not contingent on a continuing relationship with a specific technology provider. Data portability regulation in some jurisdictions provides a partial framework, but credential portability requires additional specificity because the technical mechanism for credential transfer differs from the mechanism for data transfer. Wallet providers that build lock-in into their architecture, through proprietary credential formats or export restrictions, are replicating at the wallet layer the same institutional gatekeeping that the credential architecture is supposed to displace. Preventing this requires explicit portability obligations, enforced as conditions of operating in the credential ecosystem.
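What portability means mechanically can be sketched under an assumed plain-JSON portfolio format. The format name and fields below are hypothetical (real portability would standardize on the W3C Verifiable Credential serialization); the design point is that issuer signatures travel inside the credentials, so the receiving wallet re-verifies against issuer keys and needs nothing from the exporting wallet.

```python
# Sketch of wallet-agnostic credential export under a hypothetical format.
import json

def export_portfolio(credentials: list[dict]) -> str:
    """Serialize signed credentials as-is: the proofs travel with the data,
    so the receiving wallet re-verifies against issuer keys, not this wallet."""
    return json.dumps({"format": "portfolio/v1", "credentials": credentials})

def import_portfolio(blob: str) -> list[dict]:
    data = json.loads(blob)
    if data.get("format") != "portfolio/v1":
        raise ValueError("unknown export format")
    return data["credentials"]

# Round trip between two wallets, with no issuer or provider in the loop.
wallet_a = [{"type": "DegreeCredential", "issuer": "did:example:univ",
             "proof": "issuer-signature-goes-here"}]
wallet_b = import_portfolio(export_portfolio(wallet_a))
print(wallet_b == wallet_a)
```

A wallet that blocks this round trip, through a proprietary serialization or an export restriction, is exactly the lock-in the portability obligation exists to prevent.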
The extension of governance frameworks to AI agents requires treatment that goes beyond adapting human credential governance. When AI agents hold or present delegation credentials, the governance questions become: who is responsible when an agent exceeds its delegated authority? What revocation mechanisms exist for agent credentials that have been compromised or misused? How are the principals of an agent identified and held accountable when the agent takes harmful actions? These questions do not have answers in existing legal frameworks, because AI agent credentials do not have a legal category. Creating that category, with associated liability rules and accountability mechanisms, is one of the more consequential governance tasks of the next decade. The alternative is an agentic economy in which authorization is unverifiable and accountability, when something goes wrong, is contested rather than traceable.
Accountability in this architecture requires new legal categories more broadly. Verifiable credentials do not have a legal category in most jurisdictions. Extending existing frameworks requires a legal definition of a verifiable credential, a legal obligation on institutions to issue verifiable credentials upon request for facts within their authority, legal recognition of cryptographic verification as equivalent to or better than traditional methods, safe harbor provisions for verifiers who rely on cryptographically valid credentials in good faith, and liability rules for issuers of credentials that contain errors. The principle that the burden of proof should run with the data, not with the individual disputing it, inverts the current asymmetry and places institutional accountability in a structurally appropriate location. These are not legislative novelties that require invention from first principles. They are extensions of existing consumer protection law, data protection law, and administrative law, applied to a new category of digital artifact. The legislative task is tractable. What it requires is the political recognition that verifiable credentials are infrastructure, not product, and that infrastructure governance is a public responsibility.
The infrastructure moment is present, not approaching
The infrastructure moment is often named after the technology that enables it: the railway era, the electrification era, the internet era. These names are retrospective; they become available only once the infrastructure has embedded itself in the economy to the point where its absence is unimaginable. The naming obscures the period of choice, the years or decades in which the infrastructure could have been built differently, or not built at all, or built in ways that entrenched existing power structures rather than challenging them.
The choices made in the design of digital identity infrastructure in the 1970s through the 1990s, to center trust in institutions rather than in cryptographic proof, to build systems where individuals are subjects rather than agents, were not wrong given the technical constraints of the time. They are wrong given the technical capabilities of the present. The constraints that justified the original design have been lifted. The design persists because incumbents profit from it and because the coordination required to change it is high.
This is the standard description of a locked-in equilibrium: the transition cost is concentrated and visible, the benefit of the status quo is distributed among institutions that can organize to protect it, and the distributed beneficiaries of change are not organized around this specific infrastructure question. Locked-in equilibria in infrastructure markets are typically disrupted by one of three mechanisms: technological discontinuity that makes the incumbent architecture no longer viable at any cost, regulatory mandate that forces adoption regardless of incumbent preference, or the emergence of a new entrant that does not carry the incumbency costs and builds the successor architecture from scratch. All three mechanisms are now present in the verifiable credential transition, and their relative timing will determine how quickly the gap closes and how much unnecessary friction is generated in the interim.
The technological discontinuity mechanism is already operating. The scale requirements of AI agent deployment are not accommodable by the current verification architecture. An AI agent economy that requires millions of authorization verifications per second is structurally incompatible with a verification architecture that routes each verification through institutional intermediaries. This is not a marginal stress on the system that could be addressed by incremental improvement. It is a categorical mismatch between the architecture and the workload. The discontinuity is not approaching; it is present. The lag between recognizing the architectural incompatibility and acting on it is the question of timing, and the timing is now partly determined by how quickly agent deployments scale to the point where the incompatibility becomes undeniable to the institutions that have most to lose from acknowledging it.
The regulatory mandate mechanism has precedent in the DPI context. MeitY, the RBI, SEBI, and other regulatory bodies have demonstrated willingness to mandate interoperability standards and technical frameworks when the public interest case is sufficiently clear and sufficiently organized. The public interest case for verifiable credential infrastructure is at least as clear as the case for UPI interoperability or account aggregator framework adoption. What it currently lacks is the organized advocacy that made those regulatory interventions legible as priorities. The verification economy’s incumbents are organized around protecting their revenue streams. The constituency for verifiable credentials, distributed across employers seeking faster hiring, individuals seeking portable credentials, healthcare providers seeking interoperable records, and AI developers seeking agent authorization infrastructure, has not yet organized around a common policy ask. That organizational task is as important as any technical specification work.
The new entrant mechanism is visible in pilots that have already deployed verifiable credentials in constrained contexts. Educational institutions that have issued digital diplomas under open standards, jurisdictions that have piloted verifiable driver’s licences, professional bodies that have issued cryptographically signed credentials to their members: all of these establish the existence proof and create competitive pressure for broader adoption. As the pilots expand and the verifiable credential infrastructure becomes more widely adopted, institutions that have not adopted face competitive pressure from those that have. The new entrant mechanism in this context is not a single company displacing an incumbent. It is the leading edge of institutional adoption creating pressure on laggards across a sector, one institution at a time, until the remaining holdouts find themselves isolated rather than protected by the status quo.
The regressivity of the current system is worth naming precisely before the closing argument, because the equity argument is often subordinated to the efficiency argument in infrastructure discussions, and it should not be. The verification overhead that wealthy individuals and large institutions can absorb, through lawyers, specialized services, and institutional relationships, falls most heavily on those who cannot afford them. A migrant worker who changes states loses access to institutional verification relationships built in the origin state, because the verification systems in the destination state query different databases through different channels. A worker in the informal economy who has built genuine skills outside formal credentialing systems has no verifiable proof of those skills at all, because there was no institution positioned to issue the credential that the verification system would recognize. A refugee whose credential-issuing institutions no longer function cannot prove their qualifications to any system that requires institutional confirmation. The proof gap is not merely inefficient. It is regressive in a specific and structural way: it imposes the highest costs on people with the least capacity to absorb them, and it routes economic opportunity through institutional gatekeepers who serve those already within the institutional perimeter more reliably than those outside it. Verifiable credential infrastructure does not solve these problems by redistribution. It solves them by changing the architecture so that the individual holds proof and controls its presentation. The credential travels with the person. The cryptographic signature is readable by any machine with the issuer’s public key. The verification cost falls to zero for any verifier that accepts the credential format. The answer that serves both efficiency and equity is the same: proof should reside with the person who needs to prove things.
The urgency is set not by the maturation of the technology, which is already sufficient, but by the acceleration of the conditions that make the current system increasingly untenable. AI agent deployment scales the volume of verification events beyond what institutional intermediaries can process at any acceptable latency. Generative content production scales the rate of forgery beyond what human judgment can evaluate at any acceptable error rate. Digital economic participation scales the stakes of proof failures for individuals who cannot navigate bureaucratic correction processes on institutional timelines. Each of these pressures is independent of the others. Together, they constitute a compounding argument for treating proof infrastructure as a public priority rather than a technical optimization left to voluntary adoption dynamics.
The distance between what is true and what can be shown is not a natural feature of how knowledge works. It is an artifact of how verification systems were designed half a century ago, under constraints that no longer apply, by architects who could not have anticipated public-key cryptography becoming routine, or AI agents becoming economic actors, or digital participation becoming prerequisite for economic citizenship. Those architects made reasonable choices given what they knew. The institutions that maintain those choices today, knowing what is now known, are making a different kind of choice. The infrastructure moment for verifiable credentials is not approaching. It is present. The question of whether the institutions positioned to close the gap will act is also a question of what kind of digital infrastructure we are building — and for whom — and on what assumptions about the relationship between individuals, institutions, and proof. Those assumptions were formed in one technological era. They are being stress-tested by the next. Whether they survive that contact, or are replaced by something more adequate to what the economy now requires, is the decision that sits inside what looks, from the outside, like a technical standards question.


