Systemic Controllers: How Hidden Levers in Digital Infrastructure Create Real-World Risk
Examining the actors, levers and registries that govern your money, identity and models.
[Note: Tim Bouma has published two excellent pieces about the mental model of “Things in Control” - find them here and here. I’d recommend that you read those along with this essay where I borrow his perspective to focus on risk, power and the way forward.]
From “Things in Control” to Concrete Maps of Power
Digital life runs on a quiet assumption: somewhere in the system, someone is in control of the thing that matters. An account. A token. A file. A credential. A record. We talk about “owning” data or “holding” crypto or “managing” identity, but most of that language is theatre. It masks the real question that Tim Bouma’s “things in control” framing forces into the open: who, exactly, can cause what to happen to which thing, under what conditions, and with what visibility to everyone else?
I take that framing and push it further: if we stop arguing about metaphysical “ownership” and start taking “control” seriously as the unit of analysis, what follows for architecture, governance, and regulation is not cosmetic. It is an overhaul. It changes how we draw system diagrams, how we write laws, how we evaluate risk, and how we judge whether a digital ecosystem is trustworthy at all.
The goal here is simple and ambitious at the same time. We will try to move from a mental model (“things in control” as a better way to think about digital assets) to a concrete agenda: practical design patterns, institutional roles, and supervisory questions that follow once you refuse to look away from who actually holds the controls.
What “things in control” actually changes
Viewed from a distance, “things in control” sounds like a tidy renaming exercise. Instead of talking about “my data” or “digital property”, we talk about “things” and “controllers” and “records”. Spend a bit more time with it and the framing starts to behave like a solvent. It dissolves marketing language, exposes hidden dependencies, and strips away comfortable illusions about agency in digital systems.
The core shift is from ownership metaphors to control relationships. Ownership language comes from the world of physical goods and legal titles. You own a house, a car, a book. You can exclude others. You can sell it. You can pass it on in a will. In digital environments, those intuitions break quickly. You can “own” a private key and still be at the mercy of the mobile OS vendor that can lock you out of the device. You can “own” an NFT while the contract and marketplace retain complete discretion over what happens to the associated asset. You can “own” your medical records in a patient portal while having no practical way to extract them in a usable form.
“Things in control” cuts through this by focusing on three elements:
The thing: the asset, record, entitlement, or state that matters. A balance. A claim. A credential. A pointer to some off-chain data. A configuration for an AI model.
The controller: the party that can actually cause state changes in relation to that thing. Not the party that advertises control, or claims moral authority, but the one whose actions the infrastructure will accept.
The record: the evidence that those control relationships exist and have been exercised over time. Logs, registries, ledgers, audit trails.
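To make the vocabulary concrete, here is a minimal sketch of the three elements as a data model. Every name and field below is invented for illustration; nothing here comes from Bouma’s pieces.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Thing:
    thing_id: str        # e.g. "parcel:nl:1234" or "token:usdx:0xabc" (made up)
    description: str

@dataclass(frozen=True)
class Controller:
    controller_id: str   # an identifier the infrastructure will actually accept
    kind: str            # "person", "institution", "contract", "agent", ...

@dataclass(frozen=True)
class ControlAction:
    thing: Thing
    controller: Controller
    action: str          # "freeze", "transfer", "revoke", ...
    at: datetime

# The record: append-only evidence that control exists and has been exercised.
record: list[ControlAction] = []
record.append(ControlAction(
    Thing("parcel:nl:1234", "a land parcel"),
    Controller("registrar-07", "institution"),
    "transfer",
    datetime.now(timezone.utc),
))
```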
Once you adopt this vocabulary, a lot of current debate reads differently. Arguments about “self-sovereign identity” become questions about which actors can actually issue, revoke, or override credentials and at which layers. Disputes about crypto custody become questions about who can sign transactions, who can roll back state, and who can freeze accounts at which points in the system. Discussions about AI safety become questions about who can retrain models, override guardrails, and ship new versions that have not been properly evaluated.
The important thing is that this framing is not neutral. It implicitly demands that systems be explainable in terms of explicit control relationships. If a platform cannot answer “who is the controller of this thing, and how does anyone else verify that?”, that is not merely a UX omission. It is a governance problem.
The control stack: individuals, applications, infrastructure
To understand who holds control, it helps to think in layers rather than isolated actors. The “control stack” in most real systems has at least three broad levels, even if marketing tries to sell you only the top one.
The first level is human control. These are the people and organizations that initiate actions, sign agreements, and supposedly “own” the accounts or assets. Individuals, companies, agencies, trustees, guardians, and boards of directors all sit here. They are the ones we tend to focus on when we talk about rights, responsibilities, and consent. When a platform says “you control your data”, this is the mental picture it tries to invoke: you, the human, flipping switches and clicking buttons.
The second level is application control. This is where wallets, apps, admin consoles, and dashboards live. At this layer, control is exercised through software that can shape, filter, or constrain what the human believes is possible. A crypto exchange interface decides which withdrawal options are available. A social network decides which privacy configurations even exist as choices. An enterprise SaaS portal decides how granular role-based access control can be in practice. Many important decisions about control are silently taken here under the heading of “product design”.
The third level is infrastructure control. Beneath the apps are the registries, ledgers, databases, HSMs, clouds, and operating systems that hold the definitive power to accept or reject state changes. The registry that records ownership of a domain name or a parcel of land sits at this layer. The smart contract that ultimately determines whether a token can be frozen or burned also belongs here. The mobile OS that can remotely lock down a device or block an app update is another example. Most users never see this layer directly, but it is where the real levers live.
What the “things in control” framing does is force us to see how these layers interact. The human may believe they are the controller, but the app might be the one selecting possible actions, and the infrastructure might allow an entirely different actor to override both. A stablecoin user “holding” a token in a wallet experiences human-level control, but the issuer can often change the rules at the contract level, while regulators retain the ability to intervene through supervisory or enforcement powers over the institutions behind it all.
This stack view also reveals where illusions are strongest. Many “self-custody” solutions actually rely heavily on application and infrastructure intermediaries. Many “non-custodial” services still have soft levers over user outcomes through their control of configurations, defaults, and upgrade paths. The point is not to condemn mediation. It is to name it and make it explainable.
Illusions of control in today’s ecosystems
Once you start asking “who controls this thing, precisely?”, current digital ecosystems look less like expressions of individual agency and more like complex, layered hierarchies of power.
Take mainstream crypto. On paper, it is a world of self-custody and permissionless transfers. In practice, the overwhelming majority of users interact through custodial exchanges, broker apps, and financial intermediaries. These entities control keys at scale, can unilaterally freeze withdrawals, can delist assets, and often hold opaque privileges in smart contracts. Users still talk about “owning” tokens, but it is infrastructure operators and compliance teams who decide what actually happens to those tokens when policy changes or regulators knock on the door.
Or consider everyday identity and login. People sign into services using social identity providers, enterprise SSO, or mobile platform accounts. They experience a smooth flow: one click, and access is granted. At the same time, those identity providers sit in a position of concentrated control. A change in their policies, a misconfigured risk engine, or a dispute over terms of service can abruptly cut off access to dozens of dependent services. The technical architecture often gives them unilateral control over whether a user can authenticate at all, regardless of that user’s contractual or legal rights elsewhere.
Public registries tell a similar story from a different angle. Land records, company registers, and civil registries all present themselves as neutral sources of truth. Yet their operators effectively control the authoritative records that anchor property rights, corporate identity, and civic status. An erroneous entry, a malicious update, or a silent rollback can have severe consequences for individuals and institutions. Nevertheless, the design of many registry systems treats control over records as an internal matter of administration rather than as a public, verifiable relationship that can be inspected, challenged, or audited by affected parties.
The pattern across these examples is consistent. Narratives emphasise user agency, markets, and choice. Technical architectures and institutional arrangements concentrate control in a small set of intermediaries that are lightly scrutinized on that dimension. The problem is not that these intermediaries exist. Complex systems need operators. The problem is that their role as controllers of critical things is rarely explicit, rarely disclosed, and rarely designed with proper governance and oversight in mind.
“Things in control” offers a way to redraw this picture. It allows us to detach our judgement from brand promises and focus on specific claims. Who can alter the state of this token? Who can revoke this credential? Who can reassign this land parcel? Who can disable this model? If the answer is “we are not sure” or “it depends on internal procedures that are not visible”, then what we have is not a trustworthy control regime. We have wishful thinking on top of hidden levers.
Designing verifiable control
If we accept that control needs to be named and made visible, the next step is design. How do we build systems where control is not only implemented but also verifiable by parties who depend on it?
Three primitives sit at the core of any serious attempt.
The first is identifiers and keys. Controllers must be represented by identifiers whose binding to real-world actors can be evaluated according to context. Those identifiers must be backed by cryptographic keys or equivalent mechanisms that can sign requests, authorize actions, and bind accountability. Without stable, verifiable identifiers for controllers, every other layer of control becomes guesswork. This does not mean every controller has to be a natural person. It means that when a controller is an institution, device, or agent, that arrangement should be explicit and inspectable.
The second is registries and ledgers. There must be places where authoritative state is recorded in a way that others can rely on. In some cases, this will be a traditional database under a clear institutional mandate. In others, it might be a distributed ledger with strong immutability guarantees. The important property is not the specific technology but the combination of integrity, traceability, and clarity about who configures and operates the registry. It is here that one encodes who is recognized as the controller of a thing, what actions have been taken, and under what rules.
The third is credentials and attestations. Control rarely exists in a vacuum. It is granted, recognized, or validated by other actors. A guardian might be appointed through a legal process. A system operator might be authorized by a regulator. A data processor might be contracted by a controller to act within carefully defined bounds. These relationships need formal expression. Verifiable credentials, signed policy statements, and machine-readable contracts can all play a role. They turn informal claims about control into objects that can be checked.
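A toy illustration of the three primitives working together. An HMAC over a shared secret stands in for a real asymmetric signature (a production system would use something like Ed25519 key pairs), and every identifier is invented for the example:

```python
import hashlib
import hmac
import json

# Stand-in for a real signature scheme; HMAC keeps the sketch dependency-free.
def sign(key: bytes, payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

issuer_key = b"issuer-secret"  # illustrative only

# Credential primitive: a signed statement that one identifier recognizes
# another as controller of a thing, within an explicit scope.
attestation = {
    "issuer": "did:example:land-registry",   # identifier primitive
    "subject": "did:example:alice",
    "thing": "parcel:nl:1234",
    "scope": "transfer,encumber",
}
attestation["proof"] = sign(issuer_key, attestation)

# Registry primitive: the place where the authoritative statement is recorded.
registry = {attestation["thing"]: attestation}

# A relying party re-checks the proof instead of trusting the claim.
claim = registry["parcel:nl:1234"]
body = {k: v for k, v in claim.items() if k != "proof"}
assert hmac.compare_digest(claim["proof"], sign(issuer_key, body))
```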
On top of these primitives, we can sketch a few recurring patterns.
Direct control, where a person or organization holds exclusive cryptographic authority over a thing. This is often the aspiration behind self-custody models. It has the advantage of clarity but raises hard problems around key loss, coercion, and incapacity.
Delegated control, where guardians, trustees, or agents can act on behalf of others within defined scopes. This is essential for minors, people with fluctuating capacity, or large organizations with complex internal roles. It requires fine-grained expression of who may do what, for how long, under what conditions, and with what audit trail.
Joint control, where actions require consensus or quorum among multiple controllers. Multi-signature wallets, co-signing schemes, and split-key governance models fall in this category. They offer resilience against single-point failure or abuse but can be operationally complex.
None of these patterns is universally superior. Each introduces trade-offs among autonomy, safety, speed, and accountability. The key is to make those trade-offs explicit. A person using a self-custody wallet should understand that direct control brings both freedom and fragility. A user relying on a guardian should understand the mechanisms for oversight, revocation, and appeal. A group using joint control should know how deadlock is handled and who can break ties when needed.
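These patterns can be expressed with one small mechanism, sketched below with invented names: direct control is a single grant checked against a quorum of one, delegation is a scoped grant to another party, and joint control raises the quorum.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    controller: str
    actions: frozenset[str]          # the scope of delegated powers
    expires_at: float | None = None  # None means no expiry

def authorized(grants: list[Grant], signers: set[str],
               action: str, now: float, quorum: int = 1) -> bool:
    """Count valid, in-scope, unexpired grants among the signers."""
    valid = [g for g in grants
             if g.controller in signers
             and action in g.actions
             and (g.expires_at is None or now < g.expires_at)]
    return len(valid) >= quorum

# Joint control: any 2 of 3 trustees may authorize a transfer.
grants = [Grant(c, frozenset({"transfer"})) for c in ("a", "b", "c")]
assert authorized(grants, {"a", "c"}, "transfer", now=0.0, quorum=2)
assert not authorized(grants, {"a"}, "transfer", now=0.0, quorum=2)
```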
Verifiable control also implies that relationships are not only enforced by code but also discoverable. It should be possible for a relying party to ask: who does this system claim is the controller of this thing, which attestations support that claim, and what history of actions have those controllers taken? That is the basis for both trust and effective supervision.
Governance fabrics: who arbitrates, supervises, and intervenes?
Technical clarity about control does not remove the need for governance. In fact, it sharpens it. Once we can name controllers and see how they are bound into infrastructure, the question “who oversees all of this?” becomes unavoidable.
Governance begins with disputes. Systems that manage identity, money, land, or models will see conflicts over control. People will claim that credentials were revoked unfairly, that assets were frozen without due process, that guardians abused their powers, or that registrars applied the wrong rule. In all these cases, someone must have the authority to examine evidence, interpret rules, and order changes in control.
At a minimum, this suggests three institutional roles.
Registrars and authorities are the entities that define the rules for recognizing control and recording state. In traditional domains, these are land offices, company registries, certification authorities, or civil registration systems. In newer digital ecosystems, they might be protocol governance groups, foundation boards, or multi-stakeholder consortia. Their key responsibility is to maintain explicit policy about how control is assigned, transferred, and revoked, and to run or contract the infrastructure that implements those rules.
Auditors and overseers are the parties that verify whether the registrars and controllers follow their own rules. They examine logs, configurations, and incident histories. They assess whether declared governance aligns with observed practice. They surface concentration of control, unmanaged conflicts of interest, and systemic vulnerabilities. In some contexts, they are part of internal compliance functions. In others, they are independent third parties.
Courts and adjudicators are the entities that can resolve disputes and issue binding decisions that override or confirm existing control arrangements. That might be a formal judicial system, an arbitration body, a regulator with adjudicative powers, or a sector-specific ombudsman. The important point is that the system has pathways for contestation and redress, and that those pathways are visible to people affected by control decisions.
When we speak of a governance fabric, we mean the structured interaction of these roles across many systems. A wallet operator that mediates control over high-stakes identity credentials should not be accountable only to its shareholders. It should sit inside a fabric where standards bodies, regulators, and user representatives all have defined roles in shaping and supervising its control regime. A stablecoin that intermediates large flows should not be governed solely by a foundation in a jurisdiction chosen for convenience. It should be embedded in a fabric that reflects the public interest in the financial stability and integrity impacts of its design.
The test of a governance fabric is simple. When a control decision goes wrong or is contested, can affected parties trace a path from the immediate interface all the way up to an institution that can examine and rectify the decision? If the answer is no, then we have an incomplete fabric, regardless of how elegant the underlying protocol might appear.
Policy and regulatory impact: custody, control, and accountability
Existing legal frameworks already grapple with questions of custody, control, and responsibility, but mostly in a world where the assets are tangible, intermediaries are clearly identified, and records are relatively centralized. Digital ecosystems blur all three. “Things in control” provides a way to realign regulatory attention with the realities of how modern systems operate.
Consider financial regulation. Laws often draw distinctions among custodians, brokers, payment institutions, and infrastructure providers. Each category carries different obligations around capital, risk management, consumer protection, and reporting. In many crypto and tokenized asset arrangements, those roles are fused or rearranged. The entity running a smart contract may be invisible, while the branded app in front of users claims to be “non-custodial”. The legal classification of such entities can become an exercise in creative interpretation instead of clear mapping to functional roles.
A control-focused approach would start by asking: who can unilaterally move client assets, freeze them, or alter contractual parameters? Whoever can do those things is exercising a form of custody or control and should be brought into an appropriate supervisory perimeter. It should not matter whether their marketing material insists they are “just providing software” or “only running a protocol”. If they hold the effective levers over things that have financial impact, they should face corresponding duties.
Similar reasoning applies to data protection. Frameworks like GDPR distinguish between controllers and processors. In practice, complex service chains and cloud architectures can make it hard to tell who is really determining purposes and means. A “things in control” lens would push regulators to ask not only who signs the data processing agreement but also who controls the technical means of collection, retention, and deletion. When a deletion request fails because some subsystem cannot comply, that subsystem is not a neutral machine. It is a place where control has been neglected or silently delegated.
AI governance is another frontier where control questions are central and currently underdeveloped. High-risk AI proposals often talk about model evaluation, transparency, and monitoring, but less about who can push new model versions into production, who can override guardrails, and who can change the training data pipeline. Knowing who has those powers is crucial for assessing both safety and accountability. A model that appears well-governed on paper but can be quietly replaced by a single engineer is not actually well-governed.
In all these domains, regulators can adopt a simple habit. During authorization, inspection, or enforcement, they can demand a control map. That map should describe, for each class of thing the system manages, who is recognized as controller at each layer, which institutions validate those relationships, and where logs of control actions are maintained. It should reveal both formal arrangements and practical realities. Once that becomes a norm, the incentives shift. It becomes harder to hide control behind contractual language and easier to compare systems on a meaningful axis.
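One plausible shape for such a control map, rendered as a machine-readable document. The field names below are invented for this sketch, not taken from any existing standard:

```python
control_map = {
    "thing_class": "customer-balance",
    "layers": {
        "human": {
            "controller": "account-holder",
            "powers": ["initiate-transfer", "close-account"],
        },
        "application": {
            "controller": "wallet-operator",
            "powers": ["limit-withdrawals", "suspend-ui-access"],
        },
        "infrastructure": {
            "controller": "core-ledger-operator",
            "powers": ["freeze", "reverse", "adjust"],
        },
    },
    "validated_by": ["external-auditor", "sector-regulator"],
    "control_log": "https://example.org/logs/customer-balance",  # illustrative
}
```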
Applying “things in control” in concrete domains
To keep this from floating at the level of principles, it helps to walk through a few sectors where control questions are both acute and currently under-analyzed.
Financial assets and stablecoins
Stablecoins and tokenized assets are often presented as crisp, programmable instruments. Behind that clarity sits a messy web of control.
On the surface, a stablecoin user sees a token in a wallet. They can send, receive, and sometimes redeem. At the infrastructure layer, however, the issuer often retains the ability to freeze addresses, blacklist tokens, alter contract parameters, or even migrate balances to new contracts. Banks and custodians hold the underlying reserves. Oracles supply price feeds and other data. Each of these actors is a controller of some aspect of the thing, yet very few users or policymakers can see the full graph.
A control map in this context would show, for a given stablecoin, which entity can:
Mint or burn tokens.
Freeze or seize specific balances.
Change redemption rules.
Alter code through upgrades.
Reconfigure risk or compliance policies.
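Rendered as data, such a map might look like the sketch below. Every holder and constraint is hypothetical; the point is that the map exists and can be compared across issuers.

```python
# Which entity holds which lever for a hypothetical stablecoin.
stablecoin_levers = {
    "mint_burn":        {"holder": "issuer",            "checks": ["reserve-attestation"]},
    "freeze_balance":   {"holder": "issuer-compliance", "checks": ["court-order-or-policy"]},
    "redemption_rules": {"holder": "issuer",            "checks": ["disclosure"]},
    "contract_upgrade": {"holder": "3-of-5-multisig",   "checks": ["timelock", "audit"]},
    "risk_policy":      {"holder": "compliance-team",   "checks": ["internal-review"]},
}

for lever, spec in stablecoin_levers.items():
    print(f"{lever}: {spec['holder']} (constrained by {', '.join(spec['checks'])})")
```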
Once that map is visible, conversations about systemic risk, investor protection, and market integrity become more grounded. It becomes clear where concentration of control introduces hazards, where multi-party control could reduce fragility, and where regulatory attention is most needed. For systemically important tokens, one could imagine mandated patterns such as multi-signature governance for contract upgrades, independent oversight of blacklisting functions, and explicit disclosure of control mechanisms in prospectuses.
Identity and access infrastructure
Digital identity systems are often sold on the promise of user-centric control. Verifiable credentials, decentralized identifiers, and wallet-based architectures are all pitched as ways to give individuals more agency over how they present themselves and to whom.
Yet the effective control map often tells a more complicated story. Wallet software decides how credentials are presented and where backups are stored. Mobile platforms control whether wallets can function at all. Credential issuers retain the power to revoke or reissue attributes. Guardians or delegates may hold power of attorney or similar instruments. In enterprise settings, administrators can disable accounts and revoke access regardless of credential status.
A “things in control” approach would push identity designers to draw that map explicitly. It would highlight questions like:
Who can revoke a credential and on what grounds?
Who can reset keys or recover access when a user loses control?
Who can prevent a wallet from operating in a given jurisdiction?
Who can correlate presentations across services through technical or policy means?
Once these questions are answered, governance arrangements can be designed honestly. If wallet operators are critical controllers, they should face obligations around transparency, uptime, recovery support, and due process for suspensions. If mobile OS vendors can effectively block identity apps, their role in the identity ecosystem should be recognized in policy debates. If credential issuers can affect people’s access to essential services through revocation, their decision-making should be subject to appeal and oversight.
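As one small example of what due process can mean in code, a revocation record can be forced to carry its grounds and an appeal channel as mandatory fields, so the decision is contestable rather than silent. Names here are invented:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Revocation:
    credential_id: str
    revoked_by: str
    grounds: str          # "on what grounds?" made a required field
    appeal_channel: str   # where the subject can contest the decision
    effective_at: datetime

r = Revocation(
    credential_id="cred:residence-permit:42",
    revoked_by="did:example:immigration-office",
    grounds="document-expired",
    appeal_channel="https://example.org/appeals",  # illustrative URL
    effective_at=datetime.now(timezone.utc),
)
```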
Public registries: land and company records
Land and company registries demonstrate how much is already at stake in control over records. A change in a land record can alter livelihoods and wealth. A change in company control records can affect liability, regulation, and taxation. These registries are, in many jurisdictions, under the control of state agencies that are both operator and regulator.
In a narrow administrative view, these registries are just internal databases with staff workflows. In a control-conscious view, they are critical pieces of public infrastructure where control must be transparent and contestable. Every change to a land parcel’s record is an act of control over a thing whose value extends far beyond the database. Every director appointment or removal entered in a company registry is a control decision over corporate identity.
A robust control regime in this space would treat registry operators as controllers with explicit obligations. It would provide:
Tamper-evident logs of changes to records.
Clear identification of which roles within the registry can make which types of changes.
Mechanisms for affected parties to inspect, verify, and challenge records.
Independent oversight to audit both processes and outcomes.
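Tamper evidence, the first item on that list, needs no exotic machinery. A minimal hash chain over registry changes is enough to make silent edits and rollbacks detectable; the sketch below assumes nothing beyond the Python standard library.

```python
import hashlib
import json

def append(log: list[dict], change: dict) -> None:
    """Each entry commits to its predecessor via a running digest."""
    prev = log[-1]["digest"] if log else "genesis"
    body = {"prev": prev, "change": change}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "digest": digest})

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any silent edit breaks it."""
    prev = "genesis"
    for entry in log:
        body = {"prev": prev, "change": entry["change"]}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

log: list[dict] = []
append(log, {"parcel": "nl:1234", "action": "transfer", "by": "registrar-07"})
append(log, {"parcel": "nl:1234", "action": "encumber", "by": "registrar-02"})
assert verify(log)
log[0]["change"]["by"] = "registrar-99"  # a silent edit...
assert not verify(log)                   # ...is now detectable
```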
That does not require blockchain evangelism or radical decentralization. It requires an honest admission that if people and markets rely on these registries, then control over them must be designed as a public trust function, not as a quiet administrative detail.
Data, IoT, and climate or ESG measurement
In the climate and ESG world, there is growing attention to “measurement, reporting, and verification”. Data from sensors, satellites, industrial systems, and supply chains flows into dashboards that anchor investment decisions, policy judgements, and public narratives. The assumption behind many of these systems is that once data is collected, it speaks for itself.
“Things in control” reminds us that control over measurement devices, data pipelines, and aggregation platforms is itself a kind of power. A company that controls the sensors on its own smokestacks, the software that processes their readings, and the platform that publishes emissions data holds a dense cluster of control over a “thing” that regulators and markets care about. A verifier that signs off on those numbers adds another layer.
Mapping control here means asking:
Who configures and maintains the sensors?
Who controls the software that transforms raw readings into reported metrics?
Who can exclude or include data points in the final reports?
Who can override or suppress unfavorable results?
Without that map, “climate data” risks becoming an arena where control is exercised without scrutiny. With it, assurance markets and regulatory frameworks can impose appropriate constraints, such as independent operation of critical sensors, separation of duties between operators and verifiers, and traceable logs of adjustments to reported metrics.
Risk, failure modes, and anti-patterns
Every control regime carries risks. Some come from concentration. Others come from opacity or overconfidence in automation. Naming them explicitly is part of designing systems that can fail safely instead of catastrophically.
Concentration risk arises when a small number of institutions become controllers of a large share of important things across multiple systems. A handful of cloud providers, mobile OS vendors, custodial exchanges, or identity platforms can end up holding levers whose failure or misuse would have cross-sectoral impact. From a control perspective, these entities are not just “service providers”. They are systemic controllers whose decisions can ripple through finance, identity, communication, and commerce.
Opacity risk appears when control relationships exist but are neither documented nor visible. Many systems technically log control actions but bury those logs in proprietary formats or guarded APIs. Users and regulators see only the final outcomes, not the patterns of who did what, when, and under whose authority. This creates fertile ground for quiet misuse. It also makes it difficult to diagnose incidents or to learn from near misses.
Governance theatre happens when systems adopt forms of shared or decentralized control on paper without actually shifting power. Token votes that cannot constrain foundation decisions, advisory boards that cannot veto protocol changes, and grievance mechanisms that cannot alter outcomes are all examples. They create the impression of distributed control while leaving practical levers untouched.
On the technical side, irreversible control is an anti-pattern that deserves scrutiny. Designs that allow no path for emergency intervention, correction of clearly fraudulent actions, or restoration of control after coercion might look pure from a protocol perspective. In real societies, where people lose keys, face threats, or encounter force majeure, refusal to design reversibility is less a commitment to freedom and more a decision to abandon people to bad outcomes.
Symmetrically, unbounded emergency powers are another anti-pattern. Systems that allow a small group to override any control relationship “for safety” or “in emergencies” without clear criteria, time limits, or external oversight invite abuse. The existence of such powers should be documented, constrained, and exposed to scrutiny.
The risk management implication is that systems should not simply list general threats like “cyber attack” or “data breach”. They should maintain risk registers that connect specific threat scenarios to the control relationships they exploit. For each high-stakes thing under control, there should be clarity about:
What could go wrong if a controller acts maliciously or is compromised.
How quickly such actions could propagate through dependent systems.
What detection and response mechanisms exist.
What forms of redress are available to affected parties.
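A single entry in such a register might look like the sketch below; every field name and value is invented for illustration.

```python
risk_entry = {
    "thing": "credential-revocation-list",
    "controller": "issuer-ops-team",
    "scenario": "compromised operator revokes credentials in bulk",
    "propagation": "all relying services affected within minutes",
    "detection": ["anomaly alert on revocation rate", "dual-control review"],
    "redress": ["bulk re-issuance procedure", "ombudsman escalation"],
}
```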
Without this alignment between control maps and risk analysis, many risk management exercises remain decorative rather than operational.
A forward agenda: what to build, test, and standardize
If “things in control” is to be more than a neat conceptual lens, it needs to guide concrete work. That work will look different for architects, standards bodies, regulators, and researchers, but it can be oriented around a few simple moves.
For architects and builders, the priority is to ship systems with explicit control maps. That does not have to be grand. It can start with internal documentation that describes control relationships for each major capability. It can extend to user-facing diagrams that explain who holds which levers and how interventions work in edge cases. Over time, it can grow into machine-readable descriptions: APIs that expose control metadata, registries of controllers, and event streams that log control actions in ways that others can monitor.
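As a hint of what machine-readable could mean here, the response of a hypothetical control-metadata endpoint might look like the following. The endpoint, fields, and identifiers are all assumptions of this sketch:

```python
# Hypothetical response of GET /things/{thing_id}/control
control_metadata = {
    "thing": "token:usdx:0xabc",
    "controllers": [
        {"id": "did:example:issuer", "powers": ["mint", "burn", "freeze"],
         "attested_by": ["did:example:auditor"]},
        {"id": "did:example:holder", "powers": ["transfer"]},
    ],
    "events": "https://example.org/things/token:usdx:0xabc/control-events",
}
```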
Builders can also design for reversible, audited, and time-bounded control. That means thinking about how to implement undo operations for specific classes of actions, how to separate duties among controllers, how to enforce expiry on certain privileges, and how to handle disaster recovery without opening backdoors for everyday misuse. Instead of treating these as afterthoughts, they can be part of the initial design space.
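Time-bounded privileges in particular are easy to sketch: an emergency power that logs every use and lapses on its own, instead of living on as an invisible backdoor. The structure below is illustrative, not a recipe.

```python
import time

class EmergencyPower:
    """An override privilege that expires automatically and leaves a trace."""
    def __init__(self, holder: str, ttl_seconds: float, audit_log: list):
        self.holder = holder
        self.expires_at = time.time() + ttl_seconds
        self.audit_log = audit_log

    def exercise(self, action: str) -> bool:
        if time.time() >= self.expires_at:
            return False  # the power has lapsed; no permanent backdoor
        self.audit_log.append({"holder": self.holder, "action": action,
                               "at": time.time()})
        return True

audit: list = []
power = EmergencyPower("incident-commander", ttl_seconds=3600, audit_log=audit)
power.exercise("freeze-withdrawals")  # allowed now, and recorded in `audit`
```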
Standards bodies have the opportunity to encode the “things in control” vocabulary into widely used specifications. Digital identity frameworks can include explicit models for controllers, guardians, and delegates. Financial messaging and settlement standards can include structured fields that indicate control parameters and override mechanisms. AI lifecycle standards can require documentation of who controls models, training data, and deployment channels.
Once standards adopt this mindset, they can provide reference architectures, assurance levels, and conformance criteria that are much more aligned with real-world control dynamics. Interoperability can then include not only data formats and protocols but also expectations about how control is exercised and revealed.
For policymakers and regulators, the next step is to change the questions they ask. Instead of focusing primarily on ownership, licensing, or high-level governance structures, they can insist on seeing control maps as part of authorization, supervision, and incident response. When reviewing a new product, they can ask: in this architecture, who can actually change the state of critical things, and how are those powers checked? When investigating a failure, they can ask: which controllers acted, under what authority, and with which logs?
Regulators can also encourage or mandate public registries of control for certain high-impact systems. Much as there are public lists of systemically important financial institutions, there could be public disclosure of critical controllers in payment systems, identity infrastructure, or climate data platforms. This would help markets, civil society, and other regulators understand where systemic control is concentrated.
Researchers, finally, have a wide landscape of problems to explore. They can develop methods to measure concentration of control across infrastructures, to detect mismatches between declared and observed control, and to simulate the effects of different control arrangements on resilience and fairness. They can study how control regimes intersect with power asymmetries, especially for marginalized groups, and how guardianship models can be designed to support autonomy rather than undermine it.
None of this needs to wait for perfect theory. Much can be learned by applying the “things in control” lens to existing systems today, documenting what is found, and sharing that analysis across disciplines.
From slogans to diagrams of power
The promise of the digital world has been framed for years in language of empowerment, disruption, and democratization. People are invited to “own their data”, “take control of their identity”, “be their own bank”, and “govern protocols together”. Those narratives have done their job. They have drawn attention, capital, and policy interest into the space. They have also created expectations that the underlying architectures do not always meet.
“Things in control” offers a way to reset that conversation. It does not ask us to abandon ambition. It asks us to ground ambition in clear, observable relationships. It suggests replacing vague slogans with diagrams that show who controls which things, how that control is recognized, how it is supervised, and how it can be challenged.
This is not a purely technical exercise. It is about mapping power in a form that engineers, lawyers, policymakers, and affected communities can all read. When someone claims that a system is fair, open, or user-centric, we can now ask a sharper question: what does its control map look like, and does that map match the story being told?
The stakes are high, because control over digital things increasingly translates directly into control over resources, opportunities, and narratives in the offline world. Control over identity credentials affects who can cross borders, access services, or participate in civic life. Control over tokenized assets affects wealth, liquidity, and financial stability. Control over registries affects property rights and corporate accountability. Control over models and data affects what we see, what we believe, and what we can do.
The invitation in Bouma’s framing, and the challenge for everyone working in this field, is to treat control as the backbone of our thinking, not an afterthought. If we can learn to draw honest control maps, design verifiable control regimes, and build governance fabrics that hold controllers to account, then digital infrastructure can move closer to the rhetoric that currently surrounds it.
Until then, every claim of empowerment deserves a quiet follow-up question: who really holds the controls here, and how do we know?