Consent Is Not a Data Structure
Why machine-readable consent records solve the wrong problem unless we build the missing execution layer
There is a familiar pattern in digital governance work. Something fails—socially, legally, institutionally—and we respond by formalizing its representation. We define schemas, vocabularies, metadata fields, and interoperability hooks. We do this not because representation is unimportant, but because representation is tractable. It fits the tools we know how to build. It produces diagrams. It ships standards.
Consent has followed this path with remarkable consistency.
The recent paper by Pandit, Lindquist, and Krog—“Implementing ISO/IEC TS 27560:2023 Consent Records and Receipts for GDPR and DGA”—is an instance of this instinct. It is careful, technically competent, and well-intentioned. It recognizes that consent is not merely a checkbox, and that demonstrability matters. It aims to move consent from PDFs and opaque logs into machine-readable, interoperable records that can be exchanged, verified, and audited, using the Data Privacy Vocabulary (DPV) as semantic glue.
That is a necessary step. It is also nowhere near sufficient.
This essay argues that the core failure of consent systems today is not the absence of structured records. It is the absence of operational control. Consent fails not because we cannot record it, but because we cannot execute it faithfully over time, across actors, and against incentives that actively undermine it.
ISO-27560 plus DPV gives us a vocabulary and a container. What it does not give us is a system. Treating consent as a data structure without treating it as a live governance process risks producing better paperwork rather than better outcomes.
What follows is a reframing of Pandit et al.’s contribution: where it sits, what assumptions it makes, and what must be built around it if machine-readable consent is to become more than compliance theatre.
What the Paper Actually Does—and Why That Matters
At its core, the paper performs three moves that deserve recognition for their technical precision.
First, it positions ISO/IEC TS 27560:2023 as a neutral, implementation-agnostic structure for recording consent events. The paper is explicit that ISO-27560 does not define consent validity. It defines how consent information may be recorded, linked, and evidenced. This is an important clarification, because many consent implementations quietly smuggle normative claims into their data models. The authors avoid this trap, presenting the standard as infrastructure rather than policy.
Second, it maps the information elements of ISO-27560 to GDPR requirements, especially around demonstrability, transparency, and lifecycle management. This mapping is not trivial. GDPR’s consent obligations are distributed across articles, recitals, and regulatory practice. Article 7 requires controllers to demonstrate that consent was obtained. Articles 13 and 14 mandate transparency about purposes and processing. Recital 42 insists on freely given, specific, informed, and unambiguous consent. Aligning a record structure with these distributed obligations is useful for anyone attempting systematic compliance.
Third, it proposes the Data Privacy Vocabulary as the semantic backbone for expressing purposes, personal data categories, processing operations, parties, and legal bases in a machine-interpretable way. DPV is presented as the glue that allows consent records to be interoperable, extensible, and compatible with downstream policy reasoning. The paper even explores how this infrastructure could support the EU Data Governance Act’s vision of machine-readable consent forms enabling data altruism and intermediary services.
Taken together, the paper’s contribution is best described as representational infrastructure. It tells us how to describe consent in a structured, interoperable, and semantically rich way.
That is valuable. But it also defines the limits of the work. Representation is not execution. Description is not enforcement. And auditability is not governance.
The Central Assumption: Better Records Lead to Better Consent
The paper rests on a quiet but powerful assumption: that moving consent into machine-readable, standardized records materially improves consent outcomes.
This assumption deserves scrutiny.
Machine-readable consent records unquestionably improve certain things. They make audits easier. They enable automation of compliance reporting. They reduce ambiguity in how consent was logged. They allow systems to reason over consent metadata rather than scraping free-text logs. For privacy engineers drowning in ad hoc implementations, ISO-27560 offers a welcome order.
What they do not do, by default, is make consent more valid, more meaningful, or more protective of individuals.
Most consent failures identified by regulators do not arise from missing records. They arise from manipulative interfaces, bundled purposes, asymmetries of power, deceptive defaults, consent walls, misleading notices, or the absence of real choice. The Irish Data Protection Commission’s €225 million fine against WhatsApp in 2021 turned on transparency failures, not record-keeping. Luxembourg’s €746 million fine against Amazon the same year concerned advertising conducted without valid consent, not database schemas.
None of these failures are solved by a richer data model.
A dark pattern that coerces consent can generate a perfectly structured, DPV-aligned ISO-27560 record. The interface might use pre-checked boxes, bury the “reject” option, or present false equivalencies between necessary and optional processing. The user might click through under duress, confusion, or resignation. From the perspective of the record, everything looks compliant. The timestamp is there. The purpose is labeled. The legal basis is specified. The cryptographic signature is valid.
From the perspective of the individual, nothing is.
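The gap can be made concrete. In the sketch below, the record fields are loosely modeled on ISO-27560-style elements and a DPV purpose label, but the exact names are invented for illustration, not taken from either specification. The point is structural: a freely given click and a dark-pattern click produce byte-identical records.

```python
import hashlib
import json

def make_record(pii_principal, purpose, timestamp):
    """Build an illustrative consent record. Field names are hypothetical,
    loosely inspired by ISO/IEC TS 27560-style elements."""
    record = {
        "pii_principal_id": pii_principal,
        "purpose": purpose,                    # e.g. a DPV purpose label
        "legal_basis": "eu-gdpr:A6-1-a",       # consent
        "consent_timestamp": timestamp,
        "status": "given",
    }
    # A hash (or signature) proves integrity of the record,
    # not validity of the consent behind it.
    record["integrity_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# One record from a fair interface, one from a coercive one:
fair = make_record("user-42", "dpv:Marketing", "2024-01-15T10:00:00Z")
coerced = make_record("user-42", "dpv:Marketing", "2024-01-15T10:00:00Z")
assert fair == coerced  # the record cannot tell the two apart
```

Nothing in the data structure distinguishes the two flows, which is exactly why validity must be assessed somewhere else.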
The paper acknowledges this distinction in passing, noting that a consent record is not sufficient to determine consent validity. But it does not grapple with the operational consequence of that admission. If the record does not encode validity, then validity must be assessed elsewhere. Where is that assessment defined? Who performs it? On what evidence? With what authority?
Without an executable notion of validity, machine-readable consent risks becoming a high-fidelity log of low-integrity behavior.
Semantic Interoperability Is Not Free
DPV plays a central role in the paper’s architecture. It is the vocabulary through which purposes, data categories, processing operations, and legal bases are expressed. The paper treats DPV as a shared semantic layer that enables interoperability across systems, organizations, and jurisdictions.
This is both reasonable and wildly optimistic.
Semantic interoperability is not achieved by publishing a vocabulary. It is achieved by governing how that vocabulary is used, constrained, profiled, and tested. Without governance, vocabularies drift. Organizations interpret terms differently. Purposes proliferate. Compatibility becomes subjective.
Consider the humble “purpose” field. In DPV, purposes can range from the abstract (“service provision”) to the specific (“recommending products based on purchase history”). The paper demonstrates how ISO-27560 can accommodate this flexibility. But flexibility is a double-edged sword.
Two organizations can both claim DPV compatibility while being semantically incomparable in practice. One defines “marketing” broadly to include any communication with customers. Another defines it narrowly to exclude transactional emails. One bundles analytics into “service improvement.” Another separates them. When a data intermediary tries to assess whether consent for “marketing” at Organization A is compatible with “promotional communications” at Organization B, what does the machine do?
String comparison fails. Taxonomy lookup fails without controlled hierarchies. Subsumption reasoning fails without explicit relationships. Context collapses.
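Each failure mode is easy to demonstrate. The term sets below are hypothetical organizational extensions, not actual DPV definitions; they exist only to show why naive matching breaks.

```python
# Naive approaches to cross-organization purpose matching, and how each fails.

# 1. String comparison: trivially fails on synonyms.
assert "marketing" != "promotional communications"

# 2. Taxonomy lookup: fails when organizations scope the same label
#    differently. (Hypothetical term sets, not actual DPV.)
org_a = {"marketing": {"newsletters", "ads", "transactional_email"}}
org_b = {"marketing": {"newsletters", "ads"}}  # excludes transactional email
assert org_a["marketing"] != org_b["marketing"]  # same label, different scope

# 3. Subsumption reasoning: fails when no relationship has been declared.
relations = {}  # nobody asserted how the two terms relate

def subsumes(broad, narrow, rels):
    return narrow in rels.get(broad, set())

assert not subsumes("marketing", "promotional communications", relations)
```

Without governed, explicit relationships between terms, the machine has no principled answer, only defaults.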
The paper gestures at profiles and schemas, acknowledging that ISO-27560 allows flexibility and extensibility. That flexibility enables adaptation to context, but it also creates fragmentation. If consent records are to be exchanged, reused, or evaluated across organizational boundaries—as the paper suggests, particularly in the context of data altruism and intermediaries—then purpose semantics must be more than descriptive labels. They must support reasoning.
That requires controlled taxonomies, compatibility rules, versioning discipline, and conformance testing. It requires governance infrastructure: bodies that define canonical purposes, adjudicate disputes about equivalence, and enforce semantic discipline. It requires tooling: validators that check conformance, reasoners that evaluate compatibility, and registries that track versions and extensions.
None of this is specified in the paper. All of it is operationally hard. And all of it determines whether semantic interoperability is real or aspirational.
Absent this governance layer, DPV risks becoming a polite fiction: a shared language that everyone speaks slightly differently.
Authority, Trust, and the Myth of the “Authoritative Receipt”
The paper introduces the idea that consent receipts, especially when cryptographically protected, can serve as authoritative records. It discusses signatures, tamper resistance, and the potential use of decentralized identifiers and verifiable credentials. The language is careful, acknowledging that trust frameworks and stakeholder agreements are needed.
But the conceptual gap between cryptographic authenticity and institutional authority is not bridged.
Authority in distributed systems does not emerge from cryptography alone. It emerges from governance. Who is authorized to issue a receipt? Whose signature is trusted? What happens when records conflict? How are disputes resolved?
Consider a simple scenario that exposes the problem. A user holds a consent receipt in a wallet, cryptographically signed by a controller. The controller holds a consent record in its logs, also cryptographically protected. A processor holds a derived record reflecting downstream use, obtained through a data sharing agreement. The user withdraws consent via the wallet interface. The wallet updates immediately. The controller receives the withdrawal but updates its systems late due to a deployment freeze. The processor does not update at all because the data sharing agreement doesn’t mandate real-time propagation.
Which record is authoritative? On what basis? Who arbitrates?
The wallet says: “My record is authoritative because it’s in the user’s possession, reflecting their explicit action.” The controller says: “My record is authoritative because I’m the data controller under GDPR, and only I can legally determine what processing occurs.” The processor says: “My record is authoritative for the processing I perform, which was lawful at the time data was received.”
All three claims have some validity. All three records are cryptographically authentic. None definitively resolves the dispute.
The paper hints at certificates, trust anchors, and stakeholder agreement, but it does not define a trust framework. What are the roles? What are the responsibilities? What are the liabilities? Who operates the trust infrastructure? Who pays for it? What happens when the infrastructure fails?
Without such a framework, “authoritative receipt” is a slogan, not a property. It’s marketing for a coordination problem dressed in cryptographic clothing.
Decentralized identifiers and verifiable credentials can strengthen provenance and non-repudiation. They are excellent technologies for proving “this entity made this claim at this time.” They do not, by themselves, solve authority, revocation propagation, or institutional accountability. Treating them as a trust panacea risks repeating mistakes already made in identity systems, where cryptographic sophistication obscured governance voids.
Consent Is Not an Event; It Is a Lifecycle
ISO-27560 includes support for consent lifecycle events: given, withdrawn, expired, updated. The paper highlights this as a strength, and it is one. Capturing the temporal dimension of consent matters.
However, lifecycle support in a data structure is not the same as lifecycle control in a system.
In real systems, consent withdrawal is the hardest operation. It must propagate across processors, sub-processors, caches, backups, analytics systems, and machine learning pipelines. It must do so “without undue delay,” a phrase that sounds benign until one attempts to operationalize it across distributed infrastructure.
What is “undue” when processing happens at the edge, in data warehouses, in model training runs, and in third-party integrations? Is it seconds? Hours? Days? Does it depend on the purpose? The sensitivity? The technical architecture?
The paper does not address propagation semantics. It does not define service-level expectations, acknowledgement mechanisms, or enforcement hooks. It treats lifecycle events as updates to records, not as triggers for coordinated action.
This is a critical gap. Consent that cannot be enforced across the data plane is symbolic consent. It may satisfy record-keeping obligations while failing substantive ones. The user gets a receipt that says “withdrawn,” but the processing continues because the systems were never designed to stop.
Versioning introduces further complexity. Notices change. Purposes evolve. Processors are added. At what point does an update invalidate prior consent? When is re-consent required? How are users notified? How are records reconciled?
Consider a consent record from January. In March, the controller adds a new processor. In May, it adds a new purpose. In July, it updates the retention period. Are these compatible changes? Do they require fresh consent? Can they be handled as amendments to the existing record, or do they constitute new processing activities?
GDPR provides principles: purpose limitation, data minimization, fairness. It does not provide algorithms. ISO-27560 provides fields for version tracking. It does not provide update semantics.
None of these questions are answered by a schema. All of them determine whether consent governance actually works.
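To see what “update semantics” would even mean, consider a toy change classifier. The rule sets and thresholds below are invented for illustration; real rules would need legal review per jurisdiction, and that is precisely the point—someone has to write them.

```python
# Hypothetical rules mapping controller-side changes to required actions.
# These categories are invented for illustration, not derived from GDPR.
REQUIRES_FRESH_CONSENT = {"new_purpose", "new_special_category"}
REQUIRES_NOTICE = {"new_processor", "longer_retention"}

def classify_change(change_type):
    """Decide what a change to the processing context demands."""
    if change_type in REQUIRES_FRESH_CONSENT:
        return "re-consent"
    if change_type in REQUIRES_NOTICE:
        return "notify-and-allow-objection"
    return "amend-record"

# The January-to-July scenario from above:
assert classify_change("new_processor") == "notify-and-allow-objection"
assert classify_change("new_purpose") == "re-consent"
assert classify_change("longer_retention") == "notify-and-allow-objection"
```

ISO-27560 can store the version number either way; it cannot tell you which branch of this function applies.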
Purpose Compatibility Is the Real Problem - and It Is Barely Addressed
One of the paper’s more ambitious claims is that machine-readable consent can support data reuse decisions, particularly in the context of the EU Data Governance Act and data intermediaries. It suggests that purposes expressed via DPV, combined with policy languages like ODRL, can be used to assess compatibility between original consent and proposed reuse.
This is the right problem to focus on. It is also the hardest.
Purpose compatibility is not a string comparison. It is a contextual, normative judgment. It depends on scope, granularity, expectations, safeguards, and power relations. Encoding it as a machine-evaluated rule set requires more than vocabulary alignment.
The Data Governance Act envisions data altruism organizations and data intermediary services facilitating data reuse for research, statistics, and public interest purposes. It requires that data subjects can give consent for specific purposes through machine-readable formats. The paper positions ISO-27560 and DPV as enabling infrastructure for this vision.
But consider what purpose compatibility requires in practice. A researcher wants to reuse health data originally collected for clinical care. The original consent was for “improving patient outcomes in oncology treatment.” The research purpose is “studying long-term effects of chemotherapy on cardiovascular health.” Are these compatible?
Medically, they’re related. Legally, they might be distinct processing activities. Contextually, it depends on patient expectations, the sensitivity of cardiovascular data, and the safeguards in the research protocol. Ethically, it depends on whether patients would reasonably expect their cancer treatment data to be used for cardiovascular research.
To make this determination executable, one needs:
A controlled purpose taxonomy with defined relationships. Not just labels, but a hierarchy with subsumption rules. “Cardiovascular research” is-a “medical research” is-a “health research.” But is it compatible with “oncology treatment”? That requires domain expertise encoded in the taxonomy.
Rules for constraint satisfaction. Compatibility depends on conditions. If the original consent specified “only for treating my cancer,” reuse is incompatible. If it specified “for improving cancer care and research,” reuse might be compatible under certain safeguards.
Jurisdiction-specific overlays. GDPR, HIPAA, and state-level regulations have different standards for consent scope and compatibility. A purpose that’s compatible under GDPR might not be under HIPAA.
Explicit treatment of contextual integrity. Helen Nissenbaum’s framework shows that privacy depends on norms about appropriate information flows in context. Compatibility must encode these norms.
A way to evaluate “reasonable expectations.” This is inherently fuzzy but operationally crucial. Compatibility often hinges on whether the new use would surprise the data subject.
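A minimal version of the first two requirements can be sketched as a subsumption check over a controlled taxonomy. The hierarchy and compatibility edges below encode domain judgments that, in a real system, would have to come from a governed taxonomy body—they are assumptions, not facts about oncology or law.

```python
# Hypothetical purpose taxonomy with is-a relationships.
IS_A = {
    "cardiovascular_research": "medical_research",
    "oncology_research": "medical_research",
    "medical_research": "health_research",
}
# Explicit compatibility edges: original consent scope -> reusable purposes.
COMPATIBLE = {
    "oncology_treatment_and_research": {"medical_research"},
    "oncology_treatment_only": set(),  # "only for treating my cancer"
}

def ancestors(purpose):
    """Walk up the is-a hierarchy."""
    while purpose in IS_A:
        purpose = IS_A[purpose]
        yield purpose

def compatible(original, proposed):
    """Is the proposed reuse covered by the original consent scope?"""
    allowed = COMPATIBLE.get(original, set())
    return proposed in allowed or any(a in allowed for a in ancestors(proposed))

# Broad research consent covers cardiovascular research via medical_research;
# narrow treatment-only consent does not.
assert compatible("oncology_treatment_and_research", "cardiovascular_research")
assert not compatible("oncology_treatment_only", "cardiovascular_research")
```

Even this toy version makes the governance burden visible: every edge in `IS_A` and `COMPATIBLE` is a normative decision someone must own.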
The paper acknowledges none of this complexity. It gestures at compatibility checking without specifying how it would work or who would define the rules. It mentions ODRL as a policy language but does not show how ODRL rules would capture the normative judgments required.
Without a purpose compatibility engine—not just a vocabulary but a reasoning system—machine-readable consent does not enable reuse governance. It enables reuse rationalization. Organizations can point to structured records and claim compatibility while making judgments that are opaque, inconsistent, and self-serving.
The Missing Threat Model
Consent systems operate in adversarial environments. Organizations face incentives to maximize data use. Individuals face information asymmetry and coercion. Attackers may seek to forge, replay, or correlate consent artifacts.
The paper mentions tamper protection and cryptographic signatures. It does not articulate a threat model.
This omission matters because design choices depend on threat assumptions. Are we defending against external attackers, internal misuse, or institutional drift? Are receipts public, private, or selectively disclosed? How do we prevent correlation across contexts? What happens when keys are compromised?
Consider the threat of correlation. If consent receipts contain rich metadata—purposes, data categories, controllers, timestamps—and are widely exchanged, they become surveillance artifacts. An intermediary that sees multiple receipts from the same user can build a profile: this person consented to location tracking here, health data sharing there, financial data processing elsewhere.
The receipt intended to empower the user becomes a mechanism for tracking them. This is not hypothetical. It’s a well-known problem in credential systems, where verifiable credentials that prove too much enable correlation attacks.
The paper acknowledges this tension and suggests masking or referencing sensitive information. But it does not provide guidance on where to draw the line. How much information can be elided before the receipt loses utility? How do we balance transparency for the user against privacy from intermediaries?
Without an explicit threat model, security features risk being decorative rather than defensive. Signatures prove integrity, but against whom? Hash chains prove non-repudiation, but for what purpose? Decentralized identifiers avoid central authorities, but do they avoid surveillance?
Data Minimization Versus Evidentiary Richness
Consent receipts inevitably contain sensitive information. They link individuals to purposes, data categories, and controllers. Stored centrally or in wallets, they can become surveillance artifacts.
The paper acknowledges this tension but does not resolve it. It suggests that receipts might mask or reference information instead of including it directly, but it does not provide criteria for making these trade-offs.
This is not a minor design detail. It is a structural trade-off at the heart of the system. Receipts that are too thin fail to inform users or support disputes. A receipt that says “you consented to processing by Entity X” without specifying purposes or categories leaves the user in the dark. When they try to challenge the processing, they lack evidence.
Receipts that are too rich create new privacy risks. A receipt that specifies “location data used for targeted advertising by seventeen processors in twelve jurisdictions” is informative but also a detailed map of surveillance infrastructure. If that receipt is stored in a wallet on the user’s device, it’s a treasure trove for device compromise. If it’s stored in a cloud service, it’s a target for data breaches. If it’s exchanged with intermediaries, it enables profiling.
The design space is genuinely constrained. But navigating it requires normative decisions about what a consent receipt is for. Is it primarily for the individual? For regulators? For controllers? For intermediaries?
The paper does not take a position, and without one, implementers will default to what is easiest: maximalist receipts that include everything, stored insecurely, and exchanged promiscuously. Or minimalist receipts that document nothing and protect no one.
What Is Missing: The Execution Layer
Taken together, these gaps point to a broader absence. The paper describes a data model, not a system. To make machine-readable consent meaningful, we need an execution layer that turns records into control.
Such a system would include at least the following capabilities, none of which are addressed by ISO-27560 or DPV alone.
Schema governance and conformance tooling. Sector-specific profiles, controlled vocabularies, versioning rules, and automated validation are essential. Interoperability does not emerge from optionality. Someone must define what “marketing” means in healthcare versus retail versus finance. Someone must maintain compatibility mappings when purposes evolve. Someone must enforce conformance when organizations extend the schema.
This requires governance bodies, not just technical specifications. It requires investment in registries, validators, and test suites. It requires incentives for adoption and penalties for deviation.
A consent decisioning engine. Consent records must be evaluated against concrete requests in real time. That requires policy evaluation, purpose compatibility reasoning, and jurisdictional logic. When a processor receives a data access request, the system must answer: Is there valid consent? For this purpose? From this user? Under these conditions? In this jurisdiction?
This is not a database query. It’s a reasoning task that depends on temporal logic (is consent still valid?), compatibility rules (does this purpose subsume that one?), and contextual factors (does the safeguard level match consent conditions?).
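The shape of that reasoning task can be sketched as follows. The record fields, the jurisdiction rule, and the pluggable `purpose_subsumes` reasoner are all assumptions for illustration; no standard defines this function today.

```python
from datetime import datetime, timezone

def decide(consent, request, purpose_subsumes, now=None):
    """Evaluate one data access request against one consent state.
    `consent` and `request` are illustrative dicts; `purpose_subsumes`
    is a pluggable compatibility reasoner."""
    now = now or datetime.now(timezone.utc)
    if consent["status"] != "given":
        return False, "consent not in force"
    if datetime.fromisoformat(consent["expires"]) < now:
        return False, "consent expired"            # temporal logic
    if not purpose_subsumes(consent["purpose"], request["purpose"]):
        return False, "purpose incompatible"       # compatibility rules
    if request["jurisdiction"] not in consent["jurisdictions"]:
        return False, "jurisdiction not covered"   # contextual factors
    return True, "ok"

consent = {"status": "given", "expires": "2099-01-01T00:00:00+00:00",
           "purpose": "service_improvement", "jurisdictions": {"EU"}}
request = {"purpose": "service_improvement", "jurisdiction": "EU"}

# Exact-match reasoner stands in for a real compatibility engine.
allow, reason = decide(consent, request, lambda a, b: a == b)
assert (allow, reason) == (True, "ok")
```

Note that the hard part is hidden in the lambda: the decisioning engine is only as good as the compatibility reasoner plugged into it.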
Evidence capture. Validity cannot be inferred from records alone. Systems must capture evidence of how consent was obtained: interface flows, choice symmetry, accessibility accommodations, and absence of coercion. This evidence must be cryptographically bound to the record.
Imagine a consent record that includes not just “user clicked accept” but “user was presented with unbundled choices, spent 47 seconds reviewing the notice, modified two settings, and confirmed.” That’s evidence that can support validity claims. Or evidence of “user clicked accept after 1.2 seconds on a mobile device with a 4-inch screen where the reject button was 40% smaller” that undermines them.
Capturing this evidence requires instrumentation in user interfaces, telemetry systems, and consent management platforms. It requires standards for what constitutes meaningful evidence. And it requires privacy-preserving mechanisms so evidence collection doesn’t become surveillance.
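One privacy-preserving pattern is to bind a hash of the interface evidence to the record rather than the evidence itself. The evidence fields below are hypothetical examples of what a standard might require; nothing here is drawn from ISO-27560.

```python
import hashlib
import json

# Hypothetical evidence of how consent was obtained at the interface.
evidence = {
    "choices_unbundled": True,
    "reject_button_relative_size": 1.0,   # 1.0 = same size as accept
    "seconds_before_decision": 47,
    "settings_modified": 2,
}
evidence_hash = hashlib.sha256(
    json.dumps(evidence, sort_keys=True).encode()
).hexdigest()

# The consent record carries only the hash; the raw evidence is stored
# separately and can be disclosed selectively, then verified against it.
record = {"consent_status": "given", "evidence_hash": evidence_hash}

def verify(record, disclosed_evidence):
    digest = hashlib.sha256(
        json.dumps(disclosed_evidence, sort_keys=True).encode()
    ).hexdigest()
    return digest == record["evidence_hash"]

assert verify(record, evidence)
```

The binding makes evidence tamper-evident without turning the receipt itself into a behavioral log.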
Propagation and enforcement rails. Withdrawal and updates must trigger coordinated action across the data plane. Acknowledgements must be attestable. Failures must be visible.
When a user withdraws consent, that action must cascade through every system holding their data. The consent management platform must notify all processors. Each processor must acknowledge receipt and execution. If execution fails—because a system is down, a database is locked, or a pipeline is running—that failure must be logged and surfaced.
This requires messaging infrastructure, workflow orchestration, and monitoring. It requires service-level agreements that define “without undue delay” operationally. It requires penalties for non-compliance that are enforceable and proportionate.
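A minimal sketch of attestable propagation, assuming an invented 24-hour SLA as one operational reading of “without undue delay.” Processor names and timings are fabricated for the example.

```python
from datetime import datetime, timedelta, timezone

SLA = timedelta(hours=24)  # hypothetical window for "without undue delay"

def propagate_withdrawal(processors, acks, withdrawn_at):
    """Report per-processor status from acknowledgement timestamps.
    Missing or late acknowledgements surface as visible failures."""
    report = {}
    for p in processors:
        ack = acks.get(p)
        if ack is None:
            report[p] = "FAILED: no acknowledgement"
        elif ack - withdrawn_at > SLA:
            report[p] = "LATE: acknowledged outside SLA"
        else:
            report[p] = "ok"
    return report

t0 = datetime(2024, 5, 1, tzinfo=timezone.utc)
report = propagate_withdrawal(
    ["analytics_co", "ad_network", "ml_pipeline"],
    {"analytics_co": t0 + timedelta(hours=2),
     "ad_network": t0 + timedelta(hours=30)},   # e.g. a deployment freeze
    t0,
)
assert report["analytics_co"] == "ok"
assert report["ad_network"].startswith("LATE")
assert report["ml_pipeline"].startswith("FAILED")
```

The scaffolding is trivial; the institutional question—who is penalized when `ml_pipeline` never acknowledges—is not.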
A user-facing control plane. Individuals need a coherent view of their consent relationships, their blast radius, and their options. Dashboards are not ornaments; they are governance interfaces.
A meaningful control plane shows: What have I consented to? Where is my data? Who has access? What are they doing with it? How can I change this? The dashboard must aggregate consent records across controllers, surface anomalies (why does this company have consent I don’t remember giving?), and enable action (withdraw everything related to marketing, immediately).
Building this requires data portability mechanisms, identity infrastructure, and user experience design that makes complexity comprehensible. It’s harder than any single technical component because it must work for actual humans with limited time and cognitive resources.
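The “blast radius” view above can be sketched as a query over aggregated records. The record shape and controller names are illustrative; a real control plane would ingest portable, ISO-27560-style records from many controllers.

```python
# Hypothetical aggregated consent records from several controllers.
records = [
    {"controller": "ShopCo", "purpose": "marketing", "status": "given"},
    {"controller": "ShopCo", "purpose": "order_fulfilment", "status": "given"},
    {"controller": "AdTechInc", "purpose": "marketing", "status": "given"},
    {"controller": "OldApp", "purpose": "marketing", "status": "withdrawn"},
]

def blast_radius(records, purpose):
    """Which active processing stops if the user withdraws this purpose?"""
    return sorted({r["controller"] for r in records
                   if r["purpose"] == purpose and r["status"] == "given"})

# Withdrawing "marketing" touches two controllers; already-withdrawn
# consent at OldApp is excluded.
assert blast_radius(records, "marketing") == ["AdTechInc", "ShopCo"]
```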
Dispute and redress mechanisms. Consent records should support challenges, corrections, and accountability. Without recourse, consent is declarative rather than protective.
When a user believes their consent was invalid, coerced, or exceeded, where do they go? A consent receipt should enable them to file a complaint with a data protection authority, trigger an internal review, or seek remedy through alternative dispute resolution. The record provides evidence. The system must provide process.
None of this is incompatible with ISO-27560 or DPV. All of it is orthogonal to them. And all of it determines whether machine-readable consent improves reality or merely describes it.
The Strategic Risk: Compliance Theatre with Better JSON
There is a danger in stopping at representation. Organizations can invest heavily in structured consent records, semantic vocabularies, and interoperability claims, while leaving the underlying power dynamics untouched.
The result is compliance theatre. Consent becomes legible to machines and auditors, but not meaningful to people. Withdrawal becomes a field update rather than a real interruption of processing. Purpose limitation becomes a labeling exercise rather than a constraint.
This is not a hypothetical risk. We have seen it before in identity, security, and risk management. When control systems are reduced to documentation systems, incentives fill the gap.
In identity systems, we built elaborate credential schemes without addressing the power asymmetries between issuers and subjects. The result is surveillance infrastructure with better metadata.
In security, we generated compliance reports while ignoring fundamental vulnerabilities. SOC 2 reports proliferate while breaches accelerate.
In risk management, we modelled risk distributions while ignoring systemic fragility. VaR calculations looked sophisticated until they didn’t.
In each case, the documentation became a substitute for the thing itself. Representation crowded out reality.
Consent risks the same fate. Machine-readable records can produce an illusion of control while actual control remains absent. Organizations can demonstrate they have structured records, maintained metadata, and issued receipts—all while manipulating interfaces, coercing users, and ignoring withdrawals.
Regulators can audit schemas and check conformance—all while failing to assess whether consent is actually valid, meaningful, or protective.
The paper’s contribution should be understood in this light. It is a necessary component of a larger architecture, not the architecture itself. Mistaking the component for the system is dangerous.
A Different Framing: Consent as Live Governance
If we take consent seriously, we must stop treating it as a static artifact and start treating it as live governance.
That means asking different questions. Not “How do we record consent?” but “How do we ensure consent continues to be respected?” Not “Is there a receipt?” but “Can the individual meaningfully change outcomes?” Not “Is it machine-readable?” but “Is it enforceable?”
Live governance means consent is not a transaction but a relationship. It’s not a snapshot but a process. It’s not a document but a system of control that persists, adapts, and responds.
In this framing, a consent record is not the goal. It’s an artifact of governance, not the governance itself. The record matters because it enables enforcement, dispute, and accountability. But without enforcement mechanisms, dispute processes, and accountability structures, the record is decoration.
ISO-27560 and DPV help with representation. They do not answer these questions. That is not a flaw. It is a boundary. The danger lies in mistaking the boundary for the solution.
What Good Would Look Like
Imagine a consent system built for live governance. What would it look like?
It would start with validity at the interface. Before any consent is recorded, the system would verify that the interface meets minimum standards: unbundled purposes, symmetric choice architecture, clear language, accessible design, no dark patterns. This verification would be automated where possible and audited where automation fails. Evidence of the interface state would be cryptographically bound to the consent record.
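An automated pre-check might look like the following. The thresholds and the interface-description format are hypothetical; the one grounded element is that GDPR Recital 32 rules out pre-ticked boxes as consent.

```python
# Pre-checks on a consent interface before any record may be created.
# Thresholds (e.g. the 0.9 symmetry ratio) are invented for illustration.
def interface_violations(ui):
    violations = []
    if ui["purposes_bundled"]:
        violations.append("purposes must be unbundled")
    if ui["prechecked_boxes"]:
        violations.append("no pre-ticked boxes (GDPR Recital 32)")
    # Choice symmetry: reject must be roughly as prominent as accept.
    if ui["reject_button_area"] < 0.9 * ui["accept_button_area"]:
        violations.append("asymmetric choice architecture")
    return violations

dark_pattern_ui = {"purposes_bundled": True, "prechecked_boxes": False,
                   "accept_button_area": 100, "reject_button_area": 60}
assert interface_violations(dark_pattern_ui) == [
    "purposes must be unbundled", "asymmetric choice architecture"]
```

A consent record would only be issued when this list is empty, with the checked interface state hashed into the record as evidence.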
It would include a decisioning layer that evaluates every data access request against the current consent state. Purpose compatibility would be assessed through a reasoning engine backed by controlled taxonomies and jurisdictional overlays. The engine would flag edge cases for human review but handle routine decisions automatically.
It would implement propagation guarantees. Consent changes would trigger cascading updates with acknowledgement requirements and timeout monitoring. Failures would surface immediately. Service-level agreements would define propagation windows based on processing sensitivity and technical constraints.
It would provide a user-facing control plane that aggregates consent across relationships, surfaces anomalies, and enables action. The interface would be simple enough for non-experts but detailed enough for power users. It would show a blast radius: if I withdraw consent here, what stops?
It would embed dispute mechanisms directly in the infrastructure. Users could challenge consent validity, request evidence, or file complaints without leaving the system. Records would be automatically transmitted to relevant authorities. Organizations would face defined windows for response.
It would operate within a governance framework that defines roles, responsibilities, and liabilities. Trust would emerge not from cryptography alone but from institutional arrangements backed by legal and economic consequences.
None of this exists today. Building it would require coordination across stakeholders: standards bodies, technology vendors, regulators, civil society, and affected communities. It would require investment in infrastructure that has no obvious business model. It would require legal reforms that create space for experimentation while maintaining protection.
It is hard. That’s why we build schemas instead.
The Open Items
For researchers and engineers who want to move beyond representation, here is what needs work:
Purpose compatibility reasoning. We need formalized frameworks for assessing purpose compatibility that capture domain semantics, contextual integrity, and reasonable expectations. This is an AI-hard problem requiring knowledge representation, reasoning under uncertainty, and integration of normative judgments.
Consent validity evidence. We need standards for what constitutes evidence of valid consent at the interface level, and mechanisms for capturing this evidence in privacy-preserving ways. This requires advances in secure telemetry, differential privacy for consent analytics, and formal methods for interface verification.
Propagation semantics and enforcement. We need protocols for reliably propagating consent changes across distributed systems with verifiable acknowledgement and timeout handling. This requires distributed systems research, formal verification of propagation properties, and integration with existing data infrastructure.
Governance frameworks for semantic interoperability. We need institutional designs for governing vocabulary evolution, resolving semantic disputes, and enforcing conformance. This requires policy design, mechanism design, and empirical study of governance failures.
User experience for consent control. We need HCI research on how to make complex consent relationships comprehensible and controllable for diverse populations. This requires participatory design, longitudinal studies of consent fatigue, and development of interaction paradigms beyond current dashboards.
Threat models and security architectures. We need explicit threat modeling for consent systems that addresses external attacks, internal misuse, and systemic drift. This requires security research on credential correlation, receipt privacy, and resilient trust infrastructure.
Economic and legal foundations. We need business models that sustain consent infrastructure without surveillance incentives, and legal frameworks that create liability for consent failures while enabling innovation.
This is a significant volume of interdisciplinary work that requires sustained funding and coordination. It is the work that matters.
Conclusion: The First 20 Percent
The paper by Pandit, Lindquist, and Krog does good work within its chosen scope. It clarifies how consent records can be structured, semantically enriched, and aligned with regulatory requirements. It advances the conversation beyond ad hoc logs and opaque PDFs. It provides practical guidance for implementers who need to structure consent data today.
But it addresses roughly the first 20 percent of the problem.
The remaining 80 percent lives in execution: governance, enforcement, propagation, dispute, and power. Until those layers are built, machine-readable consent will remain a sophisticated description of an unsolved problem.
Consent is not a data structure. It is a promise. And promises only matter when systems are designed to keep them.
The danger of stopping at schemas is that we declare victory while the problem persists. We point to standards and vocabularies and claim progress while users remain powerless, organizations remain unaccountable, and regulators remain unable to verify compliance meaningfully.
We produce beautiful JSON that describes surveillance rather than constrains it.
If we want consent to matter—if we want it to be more than a legal formality and a checkbox—we must build the execution layer. We must treat consent as live governance that requires infrastructure, investment, and institutional commitment.
That is harder than writing schemas. It is also what the moment demands.
The paper gives us better representations. Now we need better systems. Not someday. Not eventually. Now, before we mistake the map for the territory and the menu for the meal.
Consent is not a data structure. It’s a governance challenge dressed in technical clothing. And governance challenges require more than good schemas. They require power, process, and people willing to build systems that respect human autonomy even when it’s inconvenient.
That’s the work ahead.