Against Proofware
The Case for Ambiguity, Friction, and Human Trust
The seduction of verifiability
Every technological era has its fatal attraction. In ours, it is verifiability: the belief that if only every datum could be signed, every action logged, every decision traced, then society would finally behave. That belief is seductive because it promises certainty, accountability, and transparency. But it also risks replacing trust with trace logs, human judgment with machine registries, and moral ambiguity with procedural clarity. Current trends in digital-identifier specifications and standards argue persuasively for a world in which intelligent systems are auditable, actors credentialed, and processes provable. But here is the counter-argument: a society that over-engineers trust risks extinguishing it.
The map is not the territory
Verifiable credentials, decentralized directories, proof-carrying outputs: all rest on a dangerous premise, that reality can be captured fully in structured data. In practice, registries degrade, schemas become obsolete, upstream data is gamed, and credentials embed the biases of their issuers. The more comprehensive we try to make our schemas, the more brittle they become. A registry may validate a company’s existence, but it cannot verify that the company is acting ethically. Proof systems guarantee consistency, not truth. Geoffrey C. Bowker, co-author of Sorting Things Out, once warned that “to classify is to control.” (PMC) When every classification is written in code and signed on a ledger, the act of control becomes invisible yet absolute. The problem is not malice; it is premature certainty.
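To make that gap concrete, here is a minimal sketch in Python, using the cryptography library and an invented claim schema: verification succeeds, yet nothing in it touches whether the claim is true.

```python
# A minimal sketch: signature verification proves integrity and issuer,
# never the truth of the claim. The claim schema here is invented.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()

# The issuer signs a claim that may simply be false.
claim = {"subject": "did:example:acme", "claim": "acts_ethically", "value": True}
payload = json.dumps(claim, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Verification passes: the bytes are intact and the issuer is authentic.
issuer_key.public_key().verify(signature, payload)  # raises only if tampered with
print("credential verifies")  # consistency confirmed; truth untested
```

Every layer of a verifiable-data stack, however elaborate, bottoms out in this same move: an attestation about reality, not reality itself.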
The bureaucratization of code
Agentic AI governed by cryptographic proofs sounds like liberation from paperwork, but the underlying logic is bureaucracy’s own: rules, approvals, audit trails. Instead of human clerks, we get circuits executing steps. The promise of “governance as code” can quietly become governance by code, a regime in which deviation is disallowed, exceptions are punished, and discretion is outlawed. Human bureaucracy at least allows persuasion, empathy, and discretion; machine bureaucracy offers none. A rigidly provable system may protect against corruption, but it will also stifle experimentation, forgiveness, and creativity, the very qualities governance should preserve. In short, proofware might eliminate cheating, but it might also outlaw mercy.
The myth of neutral infrastructure
Proponents of digital public infrastructure (DPI) often describe it as neutral rails. In reality, every infrastructure is political. Which credentials count, which directories are authoritative, which registries interoperate, which proofs are mandatory: these are all political decisions. When the architecture of trust is embedded in national stacks, blockchain anchors, and smart contracts, the political debate about governance becomes a technical debate about APIs. We risk creating algorithmic constitutions, governance rules enforced not by law but by protocol. When code is law, who audits the auditors? (arXiv)
The illusion of inclusion
Digital personhood and verifiable entity credentials promise inclusion through recognition. But recognition itself can exclude. Those without credentials (migrants, informal workers, community groups) risk becoming invisible to systems that equate legitimacy with verifiability. A village cooperative that cannot afford the technical cost of verifiable data may be shut out of procurement platforms, finance, and services. When participation demands proof, the absence of proof becomes guilt by default. By design, verifiable inclusion risks becoming digitally polite exclusion.
The limits of audit as morality
Auditability is a technical virtue, not a moral one. A perfectly recorded genocide is still genocide. History is full of atrocities carried out with complete documentation and impeccable audit trails. Proof systems can tell us what happened, but not whether it should have happened. The deeper social question is moral legitimacy, not procedural compliance. The problem: moral legitimacy rarely fits into JSON. Fairness requires context, narrative, and interpretation. A machine agent may sign its outputs flawlessly and still enforce unjust policies perfectly. We must preserve room for moral friction.
The cost of continuous compliance
Proponents celebrate “real-time compliance” as efficiency yet ignore its cost. Continuous proof generation consumes compute and bandwidth and produces endless telemetry, logs, and indexes. That overhead becomes a barrier to entry for smaller players. Over time, only large institutions with amortised infrastructure can afford full compliance, leading to centralisation by affordability. What begins as a promise of decentralised trust may become a moat of compliance advantage.
Privacy: the un-payable debt
Advances in verifiable credentials often promise privacy via zero-knowledge proofs or selective disclosure. (Internet Policy Review) But privacy is more than concealment—it is the right to opacity, to exist without being measured. A world where every action is logged and every credential traceable—even anonymously—is still a world of surveillance, just disguised as compliance. Even if the content is hidden, the metadata persists. When everything is auditable, the chilling effect remains. Transparency becomes a tool of control.
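A small sketch illustrates the residue. The field names below are invented, but the shape is typical of selective-disclosure designs: claim values are blinded behind salted hashes, while the envelope still tells any observer who issued what kind of credential, and when.

```python
# A sketch of selective disclosure's blind spot: values are hidden,
# metadata is not. Field names and the schema URL are invented.
import hashlib
import json
import os

def blind(value: str) -> str:
    """Hide a value behind a salted hash; only the holder can reopen it."""
    salt = os.urandom(16)
    return hashlib.sha256(salt + value.encode()).hexdigest()

credential = {
    "claims": {                   # the content is concealed...
        "age_over_18": blind("true"),
        "nationality": blind("NL"),
    },
    "issuer": "did:example:gov",  # ...but the envelope still reveals
    "schema": "https://example.org/schemas/residency/v1",  # what kind of proof,
    "issued_at": "2025-03-01T12:00:00Z",                   # from whom, and when.
}
print(json.dumps(credential, indent=2))
```

Aggregate millions of such envelopes and you have a behavioural record that never needed to read a single claim.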
Autonomy versus accountability
The agentic-AI movement treats autonomy and accountability as compatible when they are fundamentally in tension. To be truly autonomous is to act unpredictably; to be fully accountable is to act predictably and traceably. Proofware promises both—and might deliver neither. Agents too constrained by credential checks lose autonomy; agents free of constraint lose accountability. Sometimes the most responsible act is the one done without prior authorization because the situation demands it. A culture obsessed with pre-verified action may punish precisely the initiative it claims to enable.
The ecology of uncertainty
Trust in human societies isn’t built purely on proof; it’s built on forgiveness under uncertainty. We trust because we cannot verify everything. We trust because we decide to trust. Rituals of handshake, reputation, narrative, and institutional familiarity compensate for the fact that reality is messy. Proofware replaces the logic of trust with the logic of calculation. We trade faith for frictionless risk management—and in doing so we may lose the social glue that lets communities survive mistakes.
The epistemic trap
Much of the proof-infrastructure argument presumes that the problem of misinformation is one of data integrity. But many instances of mis- or dis-information thrive not because of false data but because of interpretive bias, manipulative framing, or selective context. A signed credential doesn’t fix narrative distortion. A verifiable invoice doesn’t fix abusive business terms. Proof of origin is not proof of honesty. The deeper danger: verification can create complacency. Once we believe something is “trusted” because it passes validation, we exempt ourselves from judgment.
The governance paradox
As more governance logic migrates into code, human governance atrophies. Institutions that once deliberated policy now maintain software; the bureaucrat becomes a DevOps engineer. In theory this seems efficient; in practice it replaces deliberation with configuration management. Democracy’s strength lies in messy negotiation, contestation, and compromise. A perfectly automated governance stack may look efficient, but it would be a tyranny of clarity: everything provable, nothing debatable.
The geopolitics of proofs
Verifiable data infrastructures are not neutral. Whoever sets the credential schema controls the language of trust. If India, the EU, or China exports its standard for verifiable credentials and directories, other nations must adapt or remain unverifiable. The language of interoperability can become a new data colonialism: standardization as soft power. A decentralised directory becomes a centre of influence. The purportedly open stack becomes a new form of regulatory imperialism. (Cambridge University Press & Assessment)
The entropy of evidence
Proof systems assume permanence: signed events, immutable logs, everlasting keys. In reality, cryptography ages, keys expire, algorithms get broken, and logs rot. Maintaining verifiable chains of evidence across decades is anything but trivial. The more we depend on proof, the more fragile our systems become. In time, future historians may find themselves surrounded by perfectly signed but unreadable evidence. Integrity without interpretability is archaeology, not governance.
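A minimal sketch of that entropy, with an invented archive format and a hypothetical deprecation policy: the stored bytes survive intact while the proof attached to them quietly loses its force.

```python
# A sketch of evidentiary entropy. The archive format and the deprecation
# list are invented for illustration; the failure mode is not.
DEPRECATED_ALGS = {"md5", "sha1", "rsa-1024"}  # assumption: a curator's policy list

archived_entry = {
    "payload": b"decision record, bit-perfect since 2031",
    "sig_alg": "rsa-1024",       # state of the art when the entry was signed
    "signature": b"\x00" * 128,  # still byte-identical to the day it was made
}

def assess(entry: dict) -> str:
    """The bytes' integrity is necessary but not sufficient: the proof
    itself decays as its algorithm is broken or retired."""
    if entry["sig_alg"] in DEPRECATED_ALGS:
        return "intact but unverifiable: signature algorithm no longer trusted"
    return "verifiable (for now)"

print(assess(archived_entry))
```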
The human counter-proposal
Instead of designing for perfect proof, design for graceful doubt. Accept that systems will fail, agents will act imperfectly, and humans must remain central. Allow systems to tolerate uncertainty and humans to intervene with narrative, empathy, and exception. Hybrid governance should not mean “governance without humans” but “governance with human-machine partnership.” Let agents generate receipts—but let humans decide what forgiveness looks like.
The economic asymmetry
The proof-infrastructure model imagines a flat ecosystem, but real economies are deeply asymmetric. Large organisations can amortise the cost of verification; small ones cannot. Micro-enterprises that cannot afford the telemetry and credentialing architecture will face exclusion from high-trust markets. Compliance becomes a competitive moat. What begins as the democratisation of trust may evolve into a feudalism of compliance.
The environmental cost
Behind every cryptographic signature, ledger entry, and vector-store audit lie real environmental costs. Billions of attestation operations across global supply chains consume energy and compute. In the race for verifiability, we may replicate the kind of resource inefficiency we once criticised in blockchain networks. Integrity is valuable, but not infinitely so; at planetary scale, even proofs must justify their footprint. (arXiv)
Reclaiming narrative trust
Humans build trust not just on data but on story: we trust because we believe someone, even when we cannot verify them. A fully audited world erases the need for story and replaces it with data lineage. Yet meaning is not derived from provenance; it is derived from interpretation, empathy, and shared history. The archive cannot love us back. To remain human in a proof-rich world, we must keep spaces where stories outrank signatures.
The case for friction
Efficiency is not a universal virtue. Friction slows contagion—whether biological, informational, or financial. We design frictionless, verifiable transactions assuming every obstacle is waste. But some obstacles are ethical speed bumps—moments that force reconsideration. Friction is society’s circuit-breaker. If agentic governance eliminates friction entirely, progress may accelerate into catastrophe.
The alternative vision: resilient trust
A more humane alternative is resilient trust, not perfect trust. Resilient systems assume errors, tolerate small failures, and recover from large ones. They use verification as seasoning, not diet. Policies invite discretion rather than impose prohibition. Audits look for patterns, not punishments. Governance adapts rather than enforces. In resilient-trust architectures, proof is auxiliary, not central; agents are accountable, but their human partners remain responsible.
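A sketch, not a prescription, of what “proof as auxiliary” might look like in code; every name below is invented for illustration. A missing or failed proof routes to a person instead of triggering an automatic denial.

```python
# A resilient-trust check: verification informs the decision but never
# makes it alone. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Receipt:
    actor: str
    action: str
    proof: Optional[bytes]  # may be absent; absence is not guilt

def resilient_check(receipt: Receipt,
                    verify: Callable[[bytes], bool],
                    human_review: Callable[[Receipt], bool]) -> bool:
    if receipt.proof is not None and verify(receipt.proof):
        return True               # proof helps, when it exists and holds
    # No proof, or a failed one: escalate with context instead of rejecting.
    return human_review(receipt)  # a person decides what forgiveness looks like

# Usage: a cooperative without credentials reaches a reviewer, not a wall.
ok = resilient_check(
    Receipt(actor="village-coop-17", action="bid:procurement", proof=None),
    verify=lambda p: False,       # stand-in verifier for the sketch
    human_review=lambda r: True,  # stand-in reviewer granting an exception
)
print(ok)
```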
Conclusion
The ambition of proofware is noble: a world where truth travels with data and accountability scales with autonomy. The danger lies in substituting cryptography for conscience. Trust is not a variable to optimize; it is a cultural rhythm. No checksum can validate sincerity, and no attestation can substitute for empathy. The accountable machine may commit fewer errors, but the fallible human still learns more from them. Perhaps the greatest lesson of the twenty-first century will not be about intelligence but about responsibility. Proof will matter, but so will humility.
Further Reading & References
1. Governance and Bureaucracy in the Algorithmic Age
David Graeber, The Utopia of Rules (2015).
Yiyang Mei & Michael J. Broyde, Reclaiming Constitutional Authority of Algorithmic Power, arXiv (2025). (arXiv)
Bogdana Rakova & Roel Dobbe, Algorithms as Social-Ecological-Technological Systems: an Environmental Justice Lens on Algorithmic Audits, arXiv (2023). (arXiv)
2. Ethics of Proof and Accountability
Onora O’Neill, A Question of Trust (2002).
Frida Orlando & Sarah O’Brien, “The Human Rights Challenges of Digital ID,” Salzburg Global (2025). (Salzburg Global)
Cary Coglianese, “Transparency and Algorithmic Governance,” University of Pennsylvania (2022). (Penn Carey Law Scholarship Repository)
3. Sociotechnical Critiques of Data Infrastructures
Geoffrey C. Bowker & Susan Leigh Star, Sorting Things Out (1999).
A. Giannopoulou et al., “Digital Identity Infrastructures: A Critical Approach of Self-Sovereign Identity,” PMC (2023). (PMC)
Lena Ulbricht & Christian Katzenbach, “Algorithmic Governance,” SSRN (2024). (SSRN)
4. Complexity, Ambiguity and the Limits of Verifiability
James C. Scott, Seeing Like a State (1998).
Nassim Nicholas Taleb, Antifragile (2012).
Maijunxian Wang, “The Quantified Body: Identity, Empowerment, and Control in Smart Wearables,” arXiv (2025). (arXiv)
5. Democracy, Deliberation and Algorithmic Governance
Shannon Vallor, Technology and the Virtues (2016).
Ngozi Okidegbe, “The Outsiders of Algorithmic Governance,” SSRN (2023). (SSRN)
“Legitimacy of Algorithmic Decision-Making: Six Threats and the …” (2022). (OUP Academic)
6. Digital Identity, Power and Inclusion
“Lessons from National Digital ID Systems for Privacy, Security and Trust in the AI Age” (2025). (Tech Policy Press)
“Trustworthy Digital Identities Can Set the Standards for …” Atlantic Council (2025). (Atlantic Council)
“The Digital Identity Accountability Gap,” New Design Congress (2025). (New Design Congress)
7. On Trust, Ambiguity and Human Judgment
Anthony Giddens, The Consequences of Modernity (1990).
Zygmunt Bauman, Postmodern Ethics (1993).
Minna Ruckenstein, The Feel of Algorithms (2023). (Wikipedia)