Your Agent Works for Someone Else: Platforms, States, or People?
AI agents, protocols and trust infrastructure as instruments of power, not just progress
When your agent negotiates a purchase on your behalf, who decides which merchants it considers “trustworthy”? When it summarizes news, whose content made it into the training data and on what terms? When it helps you draft a work email, what performance metrics is it quietly collecting? These are not implementation details. They are choices about power disguised as architecture.
When “Rewiring” Really Means “Repartitioning Power”
Every story about “AI rewiring the internet” is, whether it admits it or not, a story about power. The rhetoric tends to focus on convenience, efficiency and new capabilities: agents that navigate information, talk to APIs, move money, negotiate prices and compress whole workflows into a single prompt. Protocols like the Model Context Protocol (MCP) are cast as coordination miracles, standardising how models talk to tools and data. The open web is reframed as raw material feeding these systems. It sounds like a technical evolution, perhaps even an inevitability.
Behind these architectural diagrams sits a different question: who benefits, who pays, who gets to set the rules, and who gets locked out. The internet has never been a neutral utility. Each successive layer of protocol, platform and application has redistributed bargaining power between users, workers, firms and states. Search engines and ad networks reorganised who captured attention rents. App stores reorganised who controlled distribution. Social platforms reorganised who filtered public discourse. AI agents and their protocols are the next iteration of this reordering.
The usual “legitimacy layer” framing, which talks about trust registries, verifiable credentials and governance metadata, is necessary. It explains how identities, obligations and evidence can be embedded into the stack so that agents do not operate in a legal vacuum. Yet legitimacy itself is never purely technical. The design and control of trust infrastructure is also a political project. Who operates the registries, who can issue credentials, whose standards become normative, and whose risk model defines “acceptable practice” are all political economy questions.
This text reads the agentic internet explicitly through that lens. It treats MCP, agents, content supply chains and digital public infrastructure (DPI) not just as engineering artefacts but as instruments in a larger struggle over rent extraction, regulatory reach, labour discipline and informational control. It assumes that platforms, states, workers, publishers and civil society are not passive recipients of “innovation,” but actors with conflicting interests and asymmetric capacities.
The structure follows that assumption. We begin with a historical pass over the web → app → agent trajectory, not as a sequence of UX improvements but as a story of shifting rents. We then examine MCP as a standard that organises dependency. We look at the web-as-feedstock claim as a problem of extraction and enclosure that may hollow out mid-tier producers. We consider agents from the perspective of labour and work organisation, not just productivity. We explore agentic commerce as a battle over economic rails between platforms and DPI. We then turn to states and sovereignty: how governments might govern agents, or be governed by them. Finally, we ask what it would take to design trust infrastructure that resists capture, and conclude with a simple question: who will own the legitimacy layer of the agentic internet?
Seen that way, the conversation stops being about whether agents are exciting and becomes about what kind of political and economic settlement they are ushering in.
From Web to Apps to Agents: A History of Shifting Rents
The Web Era: Distributed Surplus, Emerging Gatekeepers
The early web often gets romanticised, but it did have one important property: rent extraction was relatively diffuse. Anyone could host a website, and while infrastructure costs were not trivial, the distance between publisher and reader was short. Search engines emerged as powerful intermediaries, but they monetised primarily through advertising markets that, at least for a time, still sent meaningful traffic back to publishers. Value was captured through attention, yes, but the link structure and open protocols made it hard for a single actor to fully enclose discovery.
Even then, the seeds of concentration were visible. Search algorithms were opaque. Advertising exchanges and tracking infrastructures began to centralise data and bargaining power. Still, the mental model of the web remained one where you, as a user, visited “places”: news sites, forums, company pages. Those places had some recognisable identity. The emerging rent seekers had to negotiate with that visibility.
The App Era: Platformisation and Distribution Rents
The app era accelerated consolidation. Mobile operating system vendors created vertically integrated stacks: hardware, OS, app store, billing, developer policies. Discovery moved from search results to app store rankings and featured lists. Payments that would previously have flowed to independent gateways were re-routed through platform billing, with commissions attached. Data collection became more tightly coupled to device identity and ecosystem accounts.
From a political economy perspective, this was a classic move: convert a relatively open distribution environment into a controlled funnel, then charge rent on anything passing through. Platforms claimed this as the cost of curation and security, and there was some truth in that. Users did gain more consistent experiences and a degree of protection from malware. At the same time, developers became dependent on the policy decisions of two or three corporate actors whose decisions were not subject to democratic oversight.
The app paradigm also reconfigured labour in subtle ways. Gig economy platforms weaponised mobile connectivity and rating systems to create precarious, tightly managed work arrangements. Content creators became dependent on recommendation engines, whose opaque changes could destroy income overnight. Again, the pattern was consistent: control the interaction channel, then exercise leverage over those who rely on it for livelihood or reach.
The Agent Era: A New Rent Ladder
AI agents and protocols such as MCP represent a further step up this ladder. The unit of interaction is no longer a website or an app, but a goal expressed in language. The path from goal to outcome is orchestrated by a combination of models, tools and protocols. The human sees only a small slice of this process. The rest is an internal choreography: retrieve documents, call APIs, compose outputs, execute transactions.
This shift creates at least three new layers at which rents can be extracted.
At the bottom sit the compute and cloud layers. Training and running frontier models is highly capital intensive. Investment in specialised hardware, energy and data centre capacity gives large cloud providers structural leverage. Their pricing and service design decisions cascade upwards through the ecosystem.
Above that sit model vendors. Whoever controls the most capable models, or the most well-embedded model APIs, can charge for “cognitive capacity” the way cloud vendors charge for computation today. They can also decide which use cases are supported, which are restricted, and which are prioritised as “enterprise features.”
On top of this sits the agent and protocol layer. Operator-controlled agents embedded in productivity suites, operating systems and communication platforms act as default intermediaries. MCP and similar protocols ensure those agents can talk to many tools, but the controller of the default agent still decides which tools are surfaced, how conflicts are resolved and which economic terms are negotiated with service providers.
Finally, payments and commerce rails create another plane of dependency. An agentic commerce protocol operated by a handful of payment networks could capture significant value from transaction flows, especially if linked to integrated wallets and loyalty schemes.
Together, these form a rent ladder. Climbing that ladder requires capital, data and regulatory lobbying power. Small firms can occupy niches, but the gravitational pull lies with a few vertically integrated actors that can span multiple rungs. Unless actively countered, the agentic internet risks recasting the web’s relative openness into a far narrower field dominated by a small number of stacks whose incentives are only loosely aligned with broader social goals.
MCP and the Standardisation of Dependency
Standards as Political Technologies
Standards are never neutral. They decide what counts as “compliant,” which features are first-class, and which concerns are postponed. They embody the priorities of those who write and implement them. MCP is no exception. Framed simply, it standardises how models discover and invoke tools so that agents can interact with external systems without bespoke integration for each pair of services. This genuinely lowers friction for developers and makes multi-tool orchestration feasible.
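The coordination pattern MCP standardises can be sketched in a few lines: a server advertises its tools in a machine-readable catalogue, and an agent invokes them by name through one uniform entry point instead of a bespoke integration per service. The sketch below is illustrative only; the class and field names are hypothetical simplifications, not the actual MCP schema.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[[Dict[str, Any]], Any]

class ToolServer:
    """Hypothetical stand-in for an MCP-style tool server."""

    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def list_tools(self) -> list:
        # Discovery: the agent asks "what can you do?" once,
        # rather than shipping a custom client per service.
        return [{"name": t.name, "description": t.description}
                for t in self._tools.values()]

    def call(self, name: str, arguments: Dict[str, Any]) -> Any:
        # Invocation: one uniform entry point for every tool.
        return self._tools[name].handler(arguments)

server = ToolServer()
server.register(Tool("get_weather", "Current weather for a city",
                     lambda args: {"city": args["city"], "temp_c": 21}))

catalogue = server.list_tools()
result = server.call("get_weather", {"city": "Lisbon"})
```

Even in this toy form, the political point is visible: whoever operates the server decides which tools appear in the catalogue at all.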
However, different questions need to be asked. Who hosts the reference implementations? Who contributes most to the specification and thus shapes the roadmap? Whose threat models and risk assumptions shape security guidance? Which kinds of tools get first-class support, and which are left as “extensions”? The answers rarely point to small firms or public-interest bodies. They point to large incumbents with the resources to attend working groups, maintain code and fund foundation memberships.
Standardisation does not dissolve power asymmetries. It often hardens them. A well-designed open protocol can expand the ecosystem of complementors, but it can also provide a stable surface on which dominant actors can build higher walls.
Open Standard, Concentrated Implementation
In the case of MCP-like protocols, the most likely scenario is one where the specification is open, but the most widely used implementations are delivered as managed services by major cloud and AI platforms. Enterprises, rightly wary of operational complexity and security risks, will gravitate towards these managed offerings. They will enjoy convenience and integration, but they will also become dependent on the provider’s identity model, logging system, access control patterns and policy enforcement mechanisms.
This is not inherently malicious. It is a rational outcome of scale economics and risk aversion. Yet it changes the nature of dependency. Instead of being locked into a proprietary plugin format, enterprises are locked into a particular provider’s trust fabric. Migrating away is not just a matter of rewriting code against a different endpoint. It is a matter of rethinking the entire governance envelope: who can call what, with which approvals, and how evidence is stored and surfaced.
From a political economy standpoint, this is a quiet form of enclosure. The commons is not the protocol. The commons is the ability to integrate tools and agents without inheriting a single actor’s worldview on identity, risk and acceptable behaviour. Concentrated implementations narrow that space.
Trust Registries: Neutral Infrastructure or Gatekeeping Layer?
Trust registries and governance metadata are often proposed as remedies. By registering MCP servers, tools and agents in shared directories, and by expressing their properties in verifiable credentials, the argument goes, we can achieve transparency and accountability. Agents can then choose to interact only with entities that satisfy particular criteria. Regulators and users can inspect the landscape.
This is true, but incomplete. A registry is as neutral as its governance. If the criteria for admission into a registry are controlled by a small group of powerful actors, it can become a gatekeeping instrument. Certification schemes can be designed to favour firms with the resources to undergo expensive audits. Policy tags can be framed in ways that privilege certain business models. Safety rationales can be used to exclude competitors.
The question, then, is not “should there be trust registries,” but “who designs and operates them, under whose supervision, with what rights of appeal and what options for alternative registries.” A world where every meaningful MCP tool must be listed in a handful of platform-operated registries is very different from a world with sectoral, regional and public-interest registries governed by diverse coalitions.
If trust registries are designed as multi-stakeholder public utilities with transparent schemas, clear separation between operation and policy-setting, and accessible processes for contestation, they can reduce informational asymmetries and discipline powerful actors. If they are simply branded as “trust layers” but effectively controlled by the same platforms that operate the agents, they will entrench, not challenge, existing hierarchies.
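What separates a public utility from a gatekeeping instrument here is whether admission rules live in inspectable data or in one operator’s opaque service. A minimal sketch of the first option: criteria are declared up front, and every decision carries a machine-readable rationale that can be appealed. All field names and criteria are hypothetical.

```python
# Illustrative admission criteria for a trust registry. In a
# multi-stakeholder design these would be set by a governance body,
# published openly, and versioned -- not hard-coded by one operator.
ADMISSION_CRITERIA = {
    "has_security_audit": True,
    "publishes_incident_reports": True,
}

def evaluate(entry: dict) -> dict:
    """Return an admission decision with its rationale attached."""
    failures = [criterion for criterion, required in ADMISSION_CRITERIA.items()
                if required and not entry.get(criterion, False)]
    return {
        "entry_id": entry["id"],
        "admitted": not failures,
        "failed_criteria": failures,  # the basis for any appeal
    }

decision = evaluate({"id": "tool-42",
                     "has_security_audit": True,
                     "publishes_incident_reports": False})
```

The design choice that matters is the `failed_criteria` field: a rejected applicant knows exactly which rule it failed, which is what makes contestation and appeal processes workable.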
Content as Feedstock: Extraction, Enclosure and the Hollowing of the Middle
From Traffic as Currency to Invisible Consumption
The proposition that “the web becomes feedstock for AI” has a blunt implication for publishers. In the search and social eras, there was at least a partial loop between content creation and revenue. Page views could be monetised through ads or subscriptions. Search and social platforms mediated discovery, but traffic still arrived at the publisher’s property. There were many frictions and injustices, but visibility and revenue were coupled.
Agents break that coupling. When a user asks an agent to summarise a policy, compare different models, or suggest options for a purchase, the agent may retrieve content from multiple sources, process it and present a unified response without ever sending the user to the underlying pages. Even when citations are presented, few users will click through. The publisher becomes a silent input to a composite output, not a destination.
For large incumbents, this might be compensated by direct licensing agreements with model providers. Such deals are already emerging in news, entertainment and reference sectors. For tiny niche producers with loyal communities, membership models may sustain them regardless of what happens with agentic consumption. It is the mid-tier that looks vulnerable: organisations too big to live off personal patronage, too small or fragmented to negotiate as equals with AI giants.
From Extraction to Collective Bargaining Infrastructure
A political economy lens asks whether we can build infrastructure that allows content producers to bargain collectively with AI intermediaries. Verifiable content supply chains offer one such route. If publishers, especially in critical knowledge domains, attach provenance and licensing metadata to their content in standardised, machine-readable ways, model providers and agents cannot plausibly claim ignorance about rights. If registries exist which record who produces what under which terms, large-scale ingestion without consent becomes legally and politically harder to defend.
These mechanisms also create the possibility of measuring contribution. Instead of the current situation, where models are trained on opaque mixtures of sources, training and retrieval pipelines could maintain manifests that link back to specific supply chains. This would allow negotiations about compensation and governance to be grounded in data rather than speculation. It would not magically create fair arrangements, but it would at least make the bargaining table visible.
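A manifest of this kind need not be exotic. The sketch below shows one plausible shape, assuming a simple per-item provenance record; the schema and the crude count-based contribution measure are illustrative, not an existing standard.

```python
from collections import Counter

# Hypothetical training/retrieval manifest: each ingested item carries
# provenance and licensing metadata, so contribution can be measured
# per source instead of disappearing into an opaque mixture.
manifest = [
    {"url": "https://example.org/a", "publisher": "Mid-Tier Weekly",
     "licence": "commercial-with-attribution"},
    {"url": "https://example.org/b", "publisher": "Mid-Tier Weekly",
     "licence": "commercial-with-attribution"},
    {"url": "https://example.net/c", "publisher": "Big Wire Service",
     "licence": "direct-deal"},
]

def contribution_by_publisher(entries: list) -> dict:
    # A deliberately crude measure: item counts per publisher. Real
    # negotiations would weight by usage, length or retrieval hits,
    # but even counts make the bargaining table visible.
    return dict(Counter(e["publisher"] for e in entries))

shares = contribution_by_publisher(manifest)
```

The point is not the arithmetic but the existence of the record: once manifests exist, “how much of your output rests on our work” becomes an answerable question rather than a rhetorical one.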
Without such infrastructure, the likely outcome is enclosure. A few large publishers sign lucrative deals, gaining privileged presence in model training and retrieval. The rest of the field sees their work used with little recognition or compensation. The knowledge commons shrinks into a narrower band of sources correlated with existing economic and political power. Agents amplify that bias by repeatedly drawing from the same subset of “trusted” sources.
Democratic Consequences of a Collapsing Middle
This hollowing of the mid-tier is not just a business problem. It has democratic implications. Much of the nuance in public discourse comes from mid-sized outlets, specialised journals, civic media projects and domain-specific blogs that sit between lone individuals and global conglomerates. They often investigate niche but important topics, challenge dominant narratives and provide context that does not fit into headline formats.
If these producers cannot find sustainable footing in an agentic ecosystem, epistemic diversity suffers. Agents trained and tuned primarily on material from large institutions will tend to reproduce those institutions’ perspectives. Minority views and critical analysis will be underrepresented in the model’s latent space. Even if technically present, they will be less likely to surface given default ranking strategies and safety filters.
Political economy cares about who has voice and who is structurally silenced. The design of content supply chains, licensing frameworks and registries is not just about respecting property rights. It is about ensuring that the future knowledge infrastructure has the breadth and friction it needs to resist capture by narrow interests. Agents are powerful amplifiers. If their diet becomes overly homogenous, so will their outputs, and in turn, so will the opinions of users who rely on them.
Labour, Agents and the Reconfiguration of Work
Agents as Instruments of Labour Discipline
The labour implications of AI are often framed as a question of automation versus augmentation. That distinction is important, but political economy asks a different one: how do agents change the balance of power between workers and employers?
Agents can automate routine tasks: drafting emails, generating reports, summarising meetings, updating tickets. They can also monitor performance in granular detail, tracking response times, output volumes and adherence to scripts. In customer support, sales operations, logistics and back-office roles, this creates a new layer of instrumentation. Managers can use agents to set, monitor and enforce norms at a level of detail previously impractical.
In such settings, agents become instruments of labour discipline. They create data that can justify tighter performance targets, higher surveillance and more rapid sanctions. If designed solely around employer incentives, they risk pushing already precarious workers into more intense and less secure arrangements. The gig economy’s dependency on rating systems provides a preview: algorithmic management reshapes work even when the underlying tasks remain similar.
Worker-Owned Agents and Professional Autonomy
There is, however, another path. Agents need not be solely employer-owned. Workers and professionals can, in principle, operate their own agents that embody their interests and codes of conduct. A doctor could deploy an agent trained on current clinical guidelines and ethical standards that assists in diagnosis and patient communication. A teacher could deploy an agent that helps design lessons in line with pedagogical values rather than engagement maximisation. A journalist could use agents that source and cross-check information against curated registries rather than virality metrics.
The key is ownership and control. If an agent is provided by the employer or platform, its objective function will subtly or explicitly align with managerial priorities: throughput, cost reduction, risk avoidance. If an agent is controlled by the worker or by a professional association, its objective can be aligned with preserving autonomy, quality and ethical standards.
Digital trust infrastructure plays a role here. Professional associations could issue credentials to agents that certify adherence to certain guidelines and practices. Trust registries could record which agents are recognised by which professional bodies. Regulators could give differential legal weight to decisions made with the assistance of accredited agents. This would create a space for worker-aligned tooling in an environment otherwise dominated by employer-supplied systems.
DPI and Labour as a First-Class Beneficiary
Digital public infrastructure is usually discussed in relation to citizens and businesses. There is no reason labour should not be a primary beneficiary. Identity systems, verifiable credentials and wallets can empower workers to carry their qualifications, employment history, safety training and contractual rights across employers. In an agentic context, they can also carry delegations to agents that act on their behalf.
Imagine a labour wallet that holds a worker’s credentials and also governs what an employer-owned agent is allowed to do with their data. Delegations could encode that certain metrics cannot be used beyond specified purposes, or that certain categories of monitoring require explicit and revocable consent. Sectoral DPI could define reference standards for fair agentic management, giving workers and unions concrete artefacts to negotiate around.
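The delegation described above can be made concrete. In the sketch below, the worker, not the employer, states which purposes a monitoring agent may use a metric for, and consent is revocable at any time. The class and purpose names are hypothetical, a minimal illustration of the pattern rather than any existing wallet standard.

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    """A worker's purpose-limited, revocable grant over one metric."""
    metric: str
    allowed_purposes: set
    revoked: bool = False

    def permits(self, purpose: str) -> bool:
        # An employer-side agent must pass this check before using
        # the metric; revocation takes effect immediately.
        return not self.revoked and purpose in self.allowed_purposes

# The worker's wallet: response-time data may be used for shift
# scheduling, and for nothing else.
wallet = {"response_time": Delegation("response_time", {"scheduling"})}

d = wallet["response_time"]
ok_scheduling = d.permits("scheduling")        # consented purpose
ok_ranking = d.permits("performance_ranking")  # never consented to
d.revoked = True                               # worker withdraws consent
ok_after_revoke = d.permits("scheduling")
```

The structural shift is who holds the object: because the `Delegation` lives in the worker’s wallet, the default answer to an unanticipated use is “no” rather than “whatever the employer’s system logs”.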
By now we know that technological systems rarely empower labour by default. They need to be designed, regulated and negotiated into doing so. The rise of agents is an opportunity to embed such considerations early. It can also easily become another chapter in the long history of technologies being used primarily to extract more effort for less security if those considerations are absent.
Agentic Commerce and the Battle over Economic Rails
Proprietary Stacks versus Public Rails
Commerce is where the political stakes of the agentic internet become very explicit. The actors are clear: platforms, banks, payment networks, merchants, states and users. The battlefield is the set of rails over which discovery, decision and transaction flow.
One scenario is straightforward. Major AI platforms partner with payments providers to offer integrated agentic commerce. Users give agents permission to handle purchases within spending limits. Agents search, compare and check out through proprietary transaction protocols wired into specific wallets and cards. Merchants integrate because that is where the demand is. Over time, this creates end-to-end stacks where recommendation, negotiation and payment are tightly coupled under platform governance.
The other scenario involves public rails. Countries with real-time payment systems and open commerce protocols can require that agentic commerce integrate with these DPI components. Agents can still orchestrate discovery and decision, but funds move over regulated, interoperable systems rather than opaque platform wallets. Merchant identity, dispute resolution and consumer protections remain anchored in public frameworks rather than solely in private terms of service.
Incentives and Coalitions
The incentives are not symmetrical. Platforms benefit from proprietary rails because they capture transaction data, cross-sell opportunities and fee revenue. Payment networks see opportunities either way, but may prefer arrangements that minimise the bargaining power of domestic DPI systems. Large merchants could negotiate favourable terms inside closed stacks and gain an advantage over smaller competitors.
States have a more complex calculus. On one hand, using DPI means preserving monetary sovereignty, tax visibility and consumer protection mechanisms. On the other, building the technical and regulatory capacity to govern agentic commerce is non-trivial. The temptation to outsource complexity to a few global platforms is real, especially for smaller economies.
The political economy question becomes: which coalitions emerge? Do states and domestic financial institutions align to insist that agents plug into DPI as a condition of operating at scale? Do merchants and consumer groups push for interoperability and contestability? Or do platforms succeed in framing proprietary agentic rails as safer, more innovative and more “user-centric,” gradually marginalising public alternatives?
Trust Infrastructure as a Bargaining Instrument
Trust registries, again, can cut both ways. If registries of merchants, products, disputes and certifications are operated as open, common infrastructure, they give agents a way to implement user and regulatory preferences in a transparent manner. Consumers can configure agents to prioritise merchants with strong compliance records or to avoid suppliers with poor labour practices. Regulators can monitor patterns in near real time.
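The consumer-side configuration this implies is simple to sketch: the agent applies the user’s policy against open registry data, so the filtering rule belongs to the user rather than the platform. The registry records, scores and thresholds below are invented for illustration.

```python
# Hypothetical open merchant registry: compliance and dispute data
# held as common infrastructure, readable by any agent.
registry = [
    {"merchant": "shop-a", "compliance_score": 0.92, "labour_flags": 0},
    {"merchant": "shop-b", "compliance_score": 0.60, "labour_flags": 3},
    {"merchant": "shop-c", "compliance_score": 0.88, "labour_flags": 1},
]

def eligible(records: list, min_compliance: float,
             max_labour_flags: int) -> list:
    # The user's policy, not the platform's, decides which merchants
    # the agent will even consider.
    return [r["merchant"] for r in records
            if r["compliance_score"] >= min_compliance
            and r["labour_flags"] <= max_labour_flags]

# A user who wants strong compliance records and few labour complaints:
shortlist = eligible(registry, min_compliance=0.85, max_labour_flags=1)
```

The same three lines of filtering logic become impossible to write if the underlying records sit inside a proprietary stack, which is precisely the next point.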
If these registries are folded into proprietary stacks, they become another dimension of platform control. Only merchants integrated into the platform’s identity regime gain full visibility. Criteria for inclusion are set unilaterally. Cross-platform comparison becomes hard. States are reduced to negotiating access and audit rights to data that should arguably be part of a public ledger of economic activity.
The political economy of agentic commerce is not only about who processes the payment. It is about who sits at the intersection of data, transaction and governance. DPI offers a way to keep that intersection at least partially under democratic influence. Without it, the most likely outcome is a small number of transnational agentic commerce stacks that governments can nudge and tax, but not fundamentally direct.
States, Sovereignty and the Governance of Agents
States as Strategic Actors, Not Just Regulators
States are sometimes portrayed as slow-moving antagonists to technological progress, forever “trying to catch up.” That caricature obscures the reality that governments can be highly strategic about infrastructure. Decisions about spectrum allocation, payment systems, identity schemes and trade rules have always shaped the digital economy as much as private innovation.
With agents, states face a new set of strategic choices. They can lean into global platforms, delegating much of the complexity of agent governance to corporate actors while relying on ex post regulation and fines. They can build robust DPI and insist that agents integrate with it as a condition of operating in their jurisdiction. Or they can pursue more nationalist strategies, including localised agent stacks, data residency rules and tight content controls.
Each path has political economy implications. Delegating upwards can accelerate access to capabilities but also lock a country into dependency on a few foreign firms. Building DPI demands institutional capacity and long-term investment, but can underpin domestic innovation and regulatory leverage. Fragmentation may appeal to regimes that prioritise control over openness, but can isolate domestic ecosystems and increase costs.
Trust Registries as Regulatory Capacity
Trust infrastructure can amplify or erode state capacity. Properly designed, trust registries, credential schemas and conformance frameworks give regulators powerful tools. Instead of relying solely on paper-based reporting and ex post investigations, they can access structured, machine-readable evidence about which agents operate where, under which policies and with what history of incidents. They can condition licences on integration with registries and on producing verifiable logs.
This is not fanciful. In finance, similar moves have occurred with transaction reporting, trade repositories and real-time monitoring systems. In identity and payments, DPI has already demonstrated how standards and shared infrastructure can create both commercial opportunity and regulatory visibility. Extending this approach to AI agents is conceptually consistent, even if technically complex.
At the same time, these technologies can be turned towards control. Centralised registries of agents, tools and users, coupled with fine-grained logging, create an attractive substrate for surveillance. If combined with weak legal protections and limited institutional checks, they enable states to monitor and shape digital activity in ways that undermine civil liberties.
Sovereignty, Alignment and International Coordination
The political economy challenge is to navigate between impotence and overreach. A state that declines to engage with agent governance will find itself bargaining from weakness with global platforms whose policies set the de facto rules. A state that engages only through coercive, unilateral measures may fragment its digital economy and stifle beneficial uses.
International coordination complicates and potentially relieves this tension. Shared schemas for trust registries, inter-operable credentials for agents and tools, and mutual recognition of conformance assessments can allow regulators to cooperate without ceding all authority to private actors or multilateral bodies dominated by a few states. There is precedent in areas such as aviation safety, food standards and anti-money-laundering.
Digital public infrastructure that is deliberately designed for cross-border federation can make it easier to integrate agent governance into existing international regimes. Conversely, if agentic governance crystallises entirely in standards bodies and technical alliances steered by large firms, states may find themselves implementing rules they did not meaningfully help design.
The political economy question for states is simple to articulate and hard to answer: what combination of domestic DPI, trust infrastructure and international engagement gives them enough leverage to protect public interests without freezing out innovation and without drifting into authoritarian uses of the same tools?
Majority-Agent Traffic as Systemic Power, Not Just Systemic Risk
Agents as Agenda-Setting Intermediaries
When the majority of meaningful internet traffic comes from agents rather than humans, the institution that controls those agents does not only possess technical power. It holds agenda-setting power. The choices that agents make about which information to surface, which options to rank highly, which suppliers to favour and which sources to trust will shape markets and discourse.
In the app era, platforms could influence exposure through app store rankings and feed algorithms. Users could still, in theory, seek out alternatives by deliberately visiting websites or installing alternative apps. In an agent era, the path of least resistance is to accept whatever the default agent surfaces. Over time, this can entrench particular brands, news sources and conceptual frames.
The political economy question becomes: whose preferences define those defaults? If a handful of corporations operate the dominant general-purpose agents embedded in operating systems and productivity tools, their commercial and ideological biases will have structural impact. Even with regulatory constraints, they will exercise considerable discretion in how they interpret vague mandates such as “safety” and “quality.”
Agent Pluralism and Contestability
One way to mitigate this concentration of agenda-setting power is to promote agent pluralism. Instead of a single “super-agent” that mediates everything, users and organisations should be able to choose, configure and switch between different agents with different governance attachments. Some agents might be run by public broadcasters, others by civil society coalitions, others by professional associations or cooperatives.
Trust infrastructure is a precondition for such pluralism. Registries of agents, their funding sources, declared policies and certification status can give users and institutions a way to understand what they are delegating to. Credential formats that describe agent capabilities and alignment profiles can allow services to set differential terms. DPI can provide identity and payment substrates that make it practical for non-platform actors to operate agents at meaningful scale.
Pluralism does not eliminate power imbalances. Large platforms will still have structural advantages. However, it opens the field to contestation. Civil society organisations can point to concrete differences between agents. Users can support alternatives without being forced into technically inferior experiences. Regulators can monitor diversity rather than only concentration.
Systemic Risk and Systemic Power Together
Majority-agent traffic also introduces classic systemic risk. Coordinated failures, whether due to bugs, misaligned incentives or adversarial attacks, can cascade rapidly. If many agents rely on the same tools, models or data feeds, their errors will be correlated. Market shocks, misinformation waves or infrastructure outages can be amplified.
The same infrastructural concentration that creates systemic risk also creates systemic power. Tools that mitigate one can often mitigate the other. Observability frameworks that monitor aggregate agent behaviours can identify both emerging failures and emerging abuses of agenda-setting capacity. Requirements for diversity in models, tools and data sources can reduce the likelihood that a single platform’s bias or error dominates outcomes.
Political economy’s contribution here is to insist on seeing these as two sides of the same structure. The concentration that creates profitable rents and strategic leverage also creates fragility. Conversely, efforts to decentralise or federate agent ecosystems may reduce some efficiencies but improve resilience and democratic control.
Designing Trust Infrastructure to Resist Capture
Anti-Capture Design Principles
If we accept that trust infrastructure will be built, the question shifts from “whether” to “how.” Political economy suggests some design principles that can make capture harder and contestation easier.
Transparency by default is one. Registries and schemas should be open to inspection. The rules for inclusion, exclusion and certification should be documented and accessible. Audit trails of changes to policies and entries should be available to regulators and, where appropriate, the public.
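The audit-trail requirement can be made mechanically checkable rather than purely procedural. The sketch below, a minimal and entirely hypothetical design, hash-chains each change to a registry so that any retroactive rewrite of an earlier entry invalidates every later link, letting regulators or the public verify the history independently.

```python
import hashlib
import json

def append_change(log: list[dict], change: dict) -> list[dict]:
    """Append a registry change to a tamper-evident log.

    Each record commits to its predecessor via a SHA-256 hash, so
    rewriting history breaks the chain. Illustrative sketch only,
    not a production audit system.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"change": change, "prev": prev_hash}, sort_keys=True)
    log.append({
        "change": change,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return log

def verify(log: list[dict]) -> bool:
    """Anyone can recompute the chain from the published log."""
    prev = "0" * 64
    for record in log:
        body = json.dumps({"change": record["change"], "prev": prev}, sort_keys=True)
        if record["prev"] != prev:
            return False
        if record["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = record["hash"]
    return True

log: list[dict] = []
append_change(log, {"op": "add", "agent_id": "agent:example/a1"})
append_change(log, {"op": "revoke-certification", "agent_id": "agent:example/a1"})
assert verify(log)

log[0]["change"]["op"] = "noop"  # a retroactive edit is detectable
assert not verify(log)
```

The design choice matters politically as much as technically: a log that anyone can re-verify shifts the burden of proof away from trusting the registry operator's word.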
Pluralism by design is another. Rather than aspiring to a single global registry for agents or tools, architectures should support multiple registries, perhaps sectoral or regional, that interoperate through shared schemas. This reduces the risk that capture of one registry equates to capture of the whole field. It also allows for legitimate variation in standards across domains and jurisdictions.
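Interoperation through a shared schema is what makes this federation more than fragmentation. In the hypothetical sketch below, a relying party queries several independent registries for the same agent and can compare their views, so capture or divergence in one registry becomes visible rather than silently authoritative. Registry names and fields are invented for illustration.

```python
# A registry maps agent identifiers to entries in a shared schema.
Registry = dict[str, dict]

def resolve(agent_id: str, registries: list[Registry]) -> list[dict]:
    """Collect every registry's view of one agent.

    With multiple interoperating registries, a relying party sees all
    available views and can notice when one registry diverges from the rest.
    """
    return [reg[agent_id] for reg in registries if agent_id in reg]

# Two independent registries, sectoral and regional, sharing one schema.
sectoral = {"agent:x": {"status": "certified", "issuer": "health-authority"}}
regional = {"agent:x": {"status": "listed", "issuer": "regional-registry"}}

views = resolve("agent:x", [sectoral, regional])
assert len(views) == 2  # no single registry is the sole source of truth
```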
Exit and voice are crucial. Switching costs between agents, trust providers and registries should be kept manageable, both technically and institutionally. Users, firms and civil society should have avenues to contest entries, criteria and decisions. Those avenues should be anchored in law and governance frameworks, not left solely to private dispute mechanisms.
Finally, shared standards with diverse implementations offer a balance between coordination and resilience. Protocols and data formats can be standardised without insisting on single reference operators. Certification schemes can recognise multiple competent authorities. This approach acknowledges that some consolidation is inevitable but resists fusing every function into a single actor.
Institutional Arrangements
Design principles require institutional expression. Multi-stakeholder governance for major trust registries is one option. Boards or steering groups that include representatives from states, private firms, civil society, labour and technical communities can prevent any one bloc from unilaterally imposing its preferences. Independent oversight bodies can audit registry operations and investigate complaints.
Separation of roles is another. The entities that operate registry infrastructure need not be the same as those that define policy, perform certification or enforce sanctions. Splitting these functions can reduce conflicts of interest. For example, an industry consortium might operate the technical platform for a registry, while an independent standards body defines the schema and national regulators handle enforcement.
In some cases, it may make sense to treat critical trust infrastructure as regulated utilities. This implies governance structures similar to those used for systemically important financial market infrastructure or core telecommunications operators. Such operators would be subject to heightened resilience, fairness and accountability requirements in exchange for the privileges of operating a central piece of the digital economy.
The Politics of Who Builds What
Even with careful design, trust infrastructure will not emerge in a vacuum. The first movers will likely be large platforms, cloud providers and a handful of governments. Their initial choices will shape expectations and path dependencies. Retrofitting pluralism onto a fully centralised, privately controlled trust stack is difficult.
For this reason, political economy pushes us towards early engagement. Standards bodies, public institutions, professional associations and civil society need to be at the table while agent protocols, registries and credential schemas are being defined. They need to bring not only critiques but concrete proposals for how to encode obligations, rights and redress into the technical substrate.
There is no guarantee that such engagement will succeed fully. Powerful actors have ample resources to shape narratives, lobby regulators and absorb compliance costs in ways that smaller players cannot. However, without this engagement, the default outcome will be determined by the imperatives of profit, scale and geopolitical competition, with public-interest considerations arriving late and weak.
Who Owns the Legitimacy Layer
When people talk about who will “win” in the age of AI agents, they usually mean which model vendor, which cloud, which agent platform. Benchmarks, user growth and partnership announcements dominate the conversation. Political economy suggests a different focal point. The decisive struggle is over who will own and govern the legitimacy layer: the registries, credentials, policies and evidence systems that define who is trustworthy, under which conditions and according to whom.
If that layer is effectively owned by a handful of platforms, agents will operate in a universe where “trust” is defined by private policy teams and optimised for growth and risk containment as they perceive it. States will bargain at the margins through fines and negotiated settlements. Workers, publishers and users will adapt as best they can, with limited structural leverage.
If that layer is fragmented across states in a purely nationalist way, we risk a balkanised agentic internet where interoperability suffers, cross-border cooperation declines, and authoritarian tendencies find powerful new tools for control. Innovation will continue, but in an environment of duplicated efforts and high compliance overheads.
If, instead, that layer is built as a plural, federated and contestable set of infrastructures with meaningful public and civil society involvement, we have a chance to shape the agentic internet as something more than an extraction engine. Trust registries and DPI can become levers for fairer bargaining between platforms and other actors. Verifiable content and labour infrastructures can give publishers and workers more than rhetorical rights. International coordination on schemas and practices can embed minimum floors for accountability and transparency.
None of this will happen by accident. The technical architecture of agents and MCP will move forward regardless. The question is whether political and institutional imagination keeps pace. Treating trust infrastructure as an afterthought or a purely technical matter almost guarantees capture. Treating it as a political-economic project from the outset at least opens a path to a more balanced settlement.
The agentic internet will rewire how information, work and value move. It will also repartition who holds bargaining power in that system. The protocols, registries and credentials that decide who is trusted and on what basis are not implementation details. They are where politics will crystallise. The sooner we recognise that, the more agency we retain in choosing which future we step into.


