When Systems Operationalize Harm
Systems Don’t Merely Fail — They Operationalize Harm
Digital systems rarely fail in ways that resemble human error. They do not make occasional mistakes or suffer isolated lapses in judgement. They transform errors into processes. Once a flawed assumption is encoded, it becomes a rule. Once a misclassification is embedded, it becomes a workflow. Once a bias is translated into code, it becomes a systemic force that governs at scale. What begins as a small interpretive gap becomes a structural pattern of harm because systems execute decisions repeatedly, consistently, and without hesitation.
Harm becomes operational when the architecture does more than misinterpret—it acts on the misinterpretation. A credit model that misreads financial volatility converts uncertainty into denial. A fraud model that misreads device changes converts normal variation into suspicion. A safety filter that misreads language context converts harmless speech into violation. These actions align with the system’s logic even when they diverge from human understanding. The harm is not accidental; it is the natural output of the system’s internal reasoning.
The speed and frequency of computation intensify this pattern. A system does not need to be malicious to cause damage. It only needs to be confidently wrong and widely deployed. Every individual processed by the system absorbs the consequences of its assumptions. Every incorrect rule propagates without friction. Every misaligned decision reinforces itself through feedback loops that the affected individuals cannot see or contest. What emerges is a world in which harm is not episodic but infrastructural.
This shift matters because institutions increasingly depend on digital systems to govern eligibility, access, opportunity, and freedom of movement. The more institutions rely on systems to interpret people, the more these systems define the boundaries of what is considered legitimate behaviour. When systems operationalize harm, the consequences become harder to correct because they are embedded in the logic governing everyday life. The harm becomes policy through computation, even if no policy maker intended it.
The central challenge is not malfunction but misalignment. A system can be performant—accurate, efficient, elegant—and still harmful if its underlying assumptions reduce human complexity to a model that cannot accommodate it. The point is not to accuse systems of cruelty. It is to recognise the quiet violence of misinterpretation at scale and to develop architectures that prevent error from becoming institutionalised.
The System as Actor: How Infrastructure Gains Agency
Modern digital infrastructure no longer behaves like a passive tool. It has become an active participant in governance. When a system automatically approves or denies an application, flags an account, enforces a penalty, or removes visibility, it is exercising a form of agency. This agency is constrained by code, optimisers, and data pipelines, but its effects on individuals are indistinguishable from those of institutional decisions made by humans.
This agency emerges from the convergence of three characteristics: autonomy, repeatability, and authority. Autonomy allows systems to take actions based on internal logic rather than human evaluation. Repeatability ensures that these actions are expressed uniformly across all users without contextual variation. Authority grants these actions institutional legitimacy because organisations have delegated decision-making to computational processes. Once these characteristics align, the system begins to operate as a governing force.
The problem is not that systems act, but that their agency is unaccountable. The logic governing system behaviour is often opaque even to those who operate the infrastructure. Models evolve through training data that may contain buried bias. Rulesets are authored by multiple teams with inconsistent assumptions. Risk thresholds are tuned by people who never meet the individuals affected by them. Over time, these internal rules form a distributed governance regime with no coherent mechanism for self-correction.
When infrastructure gains agency, design becomes legislation. Every architecture decision establishes constraints. Every model parameter encodes a value judgement. Every exception—or the absence of an exception—defines the boundaries of acceptable behaviour. Designers do not merely build systems; they establish institutional norms. These norms persist long after the designers have moved on because the system continues to enforce them automatically.
This produces a new kind of institutional power—one that is precise, relentless, and insulated from narrative explanation. A person can reason with a human evaluator. They cannot reason with a risk model. They cannot tell the system that their circumstances have changed. They cannot explain nuance, justify variation, or negotiate context. The system acts based on signals, not stories. And because its agency is embedded in infrastructure, the person has no appeal except to the institution that delegated authority to the system in the first place.
Recognising systems as actors is not a philosophical indulgence. It is a practical necessity. Without acknowledging their agency, institutions cannot design oversight mechanisms that treat system decisions as governance actions rather than computational outputs. Systems do not become more humane through aspiration; they become more humane through structures that constrain their authority.
Harm by Design: The Architecture of Misalignment
Misalignment in computational systems is often subtle. It begins with design decisions that optimise for efficiency, scalability, or predictability. These goals are reasonable from an engineering perspective, but they can distort the interpretive landscape when applied to human identity and behaviour. Misalignment arises when the system’s optimisation target diverges from the institution’s moral or social objectives.
Classification schemas illustrate this problem clearly. Systems need categories to function, but these categories often oversimplify. They collapse multidimensional identities into binary choices. They treat variation as noise. They assume stability where instability is normal. Once encoded, these schemas shape every input the system receives and every output it generates. The harm arises not because the system is inaccurate, but because its framing excludes crucial dimensions of human experience.
Optimisation objectives amplify the issue. A fraud model optimises for low false negatives, even if doing so raises false positives dramatically. A credit model optimises for portfolio-level stability, even if it penalises individuals whose financial patterns do not match idealised norms. A recommendation model optimises for engagement, even if engagement correlates with polarisation. The system performs exactly as designed, but the design itself contains a misaligned model of value.
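To make that trade-off concrete, here is a minimal sketch with entirely invented score distributions, volumes, and thresholds (none drawn from a real system): pushing a fraud threshold down to reduce false negatives dramatically inflates false positives.

```python
import random

random.seed(0)

# Invented score distributions: higher score = "more suspicious".
legit = [random.gauss(0.30, 0.12) for _ in range(100_000)]  # legitimate users
fraud = [random.gauss(0.65, 0.12) for _ in range(1_000)]    # actual fraud (rare)

def outcomes(threshold):
    false_positives = sum(s >= threshold for s in legit)  # legitimate users flagged
    false_negatives = sum(s < threshold for s in fraud)   # fraud that slips through
    return false_positives, false_negatives

# Lowering the threshold to catch more fraud floods the system with wrongly flagged people.
for t in (0.60, 0.50, 0.40):
    fp, fn = outcomes(t)
    print(f"threshold={t:.2f}  missed fraud={fn:>4}  wrongly flagged={fp:>6}")
```

Each step down the threshold recovers a modest number of missed fraud cases while multiplying the number of wrongly flagged legitimate users, and the model is doing exactly what it was asked to do.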
Bias is another structural source of misalignment. Systems trained on historical data inherit the inequities embedded in that data. They reproduce these inequities not because they malfunction but because they function too well. Bias becomes an operational feature—not an aberration—because the system faithfully mirrors patterns learned from flawed inputs. What was socially unjust becomes computationally standardised.
These sources of misalignment converge in deployment. Once a system is integrated into institutional processes, it shapes outcomes at scale. It defines who is eligible, who is visible, who is suspicious, who is employable, who is creditworthy, and who is allowed to proceed. Each design choice carries latent consequences that become visible only when multiplied across millions of interactions. Misalignment becomes a systemic condition, not an edge case.
The tragedy of misalignment is that it is rarely malicious. It emerges from design logic that prioritises computational efficiency over human nuance. Yet the consequences are deeply human. Misalignment shapes opportunity, mobility, and autonomy. It constructs invisible barriers and enforces them silently. It transforms institutional blind spots into everyday constraints, felt most acutely by those least able to navigate the system’s assumptions.
Preventing misalignment requires designing with the expectation that systems will operationalize their assumptions fully. The goal is not to perfect prediction but to constrain harm. Systems cannot be expected to understand every context, but they can be designed to avoid hardcoding misinterpretation into governance.
Harm at Scale: Why Computation Amplifies Impact
Digital systems operate with an intensity that no human bureaucracy has ever matched. They process decisions continuously. They repeat logic without fatigue. They apply rules uniformly to populations measured in millions. At this scale, even a small design defect becomes a structural force. A threshold set slightly too low can exclude entire communities. A model trained on incomplete data can misclassify vast demographic groups for years. A risk score calibrated crudely can reshape access to finance for an entire economic segment. The magnitude of harm is not a reflection of intent but of scale.
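A back-of-the-envelope sketch, with wholly invented volumes and error rates, shows how a "small" defect becomes a structural force:

```python
decisions_per_month = 40_000_000   # hypothetical automated decisions per month
false_positive_rate = 0.005        # a "small" 0.5% calibration defect

wrongly_affected = decisions_per_month * false_positive_rate
print(f"{wrongly_affected:,.0f} people wrongly affected every month")  # 200,000
print(f"{wrongly_affected * 12:,.0f} every year")                      # 2,400,000
```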
Computational scale collapses the distinction between error and pattern. In a human system, a mistaken judgement is an isolated event that may or may not repeat. In a computational system, a mistaken judgement is a reproducible outcome that affects every person whose data passes through the pipeline. Harm becomes predictable because it is encoded in the system’s structure. The same logic that enables efficiency enables the rapid propagation of misinterpretation.
The speed with which systems act further intensifies the spread of harm. Digital infrastructure responds instantly, often without human review. Decisions that would once require deliberation are now executed in milliseconds. This velocity eliminates the natural friction that once gave institutions time to detect anomalies. When systems operate on automated decision cycles, misinterpretation propagates before anyone notices it. By the time a problem becomes visible, the harm has already become widespread.
This amplification is not only quantitative but qualitative. Computational systems can create new forms of harm that did not exist in human institutions. A human evaluator can sense when a rule is too rigid. A machine cannot. A human can interpret an anomaly as context. A machine treats anomalies as risk. A human can adjust a decision in light of nuance. A machine lacks the epistemic vocabulary for nuance. Scale transforms these limitations from quirks into systemic properties.
The result is a form of governance that is precise yet indifferent. Systems carry out their logic flawlessly even when the logic is flawed. They enforce rules consistently even when consistency produces unfairness. They repeat decisions indefinitely even when circumstances change. The harm becomes durable because scale converts every assumption into an institutional reality. The challenge is not simply that systems act at scale but that they do so with total confidence in their own logic.
Forced Legibility as Harm: When People Must Fit the Model
Digital systems demand legibility because legibility simplifies computation. They need clean inputs, consistent categories, and predictable behaviour. Anything that deviates from these requirements introduces friction. To reduce friction, systems impose expectations on the individuals they govern. These expectations may appear subtle—consistent login patterns, stable device usage, regular income, predictable movement—but they become consequential once encoded as risk signals.
Forced legibility arises when individuals must adjust their lives to satisfy a model’s assumptions. People begin modifying their behaviour not because the behaviour is risky but because the system treats deviation as risk. Someone working multiple gig jobs may find that the system cannot interpret their unpredictable income. A person balancing caregiving with freelance work may appear unstable to systems trained on salaried careers. A migrant moving between temporary accommodations may trigger location-based flags designed for residential stability. These misinterpretations exert pressure on individuals to behave in system-normal ways.
This pressure distorts autonomy. Systems reward those whose lives resemble the patterns embedded in their training data. They penalise individuals whose circumstances fall outside modelled norms. These penalties are not moral judgements; they are computational defaults. The model’s inability to interpret variation becomes a burden for the person to correct. Individuals must perform predictability to remain trustworthy because unpredictability is indistinguishable from risk in a system that cannot recognise context.
Forced legibility is especially damaging because it is invisible. People rarely know which behaviours the system deems suspicious or unstable. They may attempt to behave more consistently, change how they use devices, avoid legitimate actions that appear anomalous, or limit their digital footprint to avoid triggering risk filters. This cognitive load corrodes autonomy. The individual no longer chooses behaviour freely; they choose behaviour that satisfies a model they cannot see.
The most profound harm is cultural. Forced legibility erodes the diversity of life patterns. Systems built around normative assumptions pressure individuals to conform. People living non-linear, multi-contextual, or irregular lives face a structural disadvantage. In effect, computational expectations reshape human behaviour at societal scale. The system does not simply interpret the world; it quietly disciplines it.
The solution is not to eliminate legibility but to shift its burden. Systems must accept that humans are complex and volatile. They must tolerate variation without interpreting it as instability. And they must rely less on behavioural inference and more on verifiable claims that reflect actual identity rather than desired patterns of behaviour.
The Economics of Harm: How Misinterpretation Creates Material Loss
Harm in digital systems often appears abstract, but its consequences are economic and concrete. Misinterpretation becomes material when systems control access to credit, employment, healthcare, education, public benefits, or mobility. A misclassified risk score can raise borrowing costs or block credit altogether. A misread employment history can eliminate job prospects. A misinterpreted transaction pattern can freeze financial accounts. A flawed eligibility system can cut someone off from essential services.
Economic harm accumulates because systems convert momentary anomalies into long-term disadvantage. A temporary income fluctuation can result in a credit penalty that persists for years. An algorithmic fraud flag can lock someone out of financial products indefinitely. A miscategorised transaction history can influence risk models used across multiple institutions. Once encoded, these decisions become part of the individual’s digital shadow, influencing outcomes beyond the original context.
These harms disproportionately affect individuals who already operate at the margins of economic stability. People navigating irregular employment patterns or living in transitional housing face the highest risk of misinterpretation. The very conditions that make life volatile also produce data patterns that systems misread. As a result, harm amplifies disadvantage. Systems designed to evaluate risk inadvertently intensify it.
The aggregation of economic harm compounds at the societal level. When misclassification becomes common, institutions lose access to accurate signals. Debt markets misprice risk. Labour markets misread talent. Insurance systems misjudge eligibility. Consumers lose trust in the fairness of digital governance. Economic coordination depends on interpretive accuracy, and when systems operationalize harm, coordination erodes quietly.
What makes the economics of harm particularly insidious is the absence of redress. Individuals often lack the ability to challenge algorithmic classifications because the underlying logic is opaque. Even when errors are detectable, institutions lack processes for correction. The economic losses become irreversible because the system has no memory of its mistakes. Economic well-being is reshaped by misinterpretation, and misinterpretation is reinforced by institutional inertia.
The only durable solution is to reduce the system’s dependence on behavioural inference. Verification provides a mechanism for presenting structured, authoritative truth. Verified claims reduce uncertainty, restrict the interpretive space where misclassification occurs, and create a baseline of trust that allows economic systems to function more accurately. Without this shift, economic harm remains an unavoidable consequence of digital decision-making.
Semantic Harm: When Systems Misread Meaning
Semantic harm arises when systems interpret language, behaviour, or content without understanding their meaning. Human communication relies on nuance, tone, context, humour, cultural signalling, and shared experience. Computational systems rely on tokens, patterns, and statistical associations. When systems operationalize language without comprehension, misinterpretation becomes systemic.
Content moderation systems illustrate this challenge vividly. They can detect patterns associated with prohibited content but cannot reliably distinguish between harmful speech and commentary about harm. They cannot parse satire, regional idiom, dialect variation, or cultural reference. As a result, they remove benign content while allowing harmful content to slip through. These decisions influence visibility, access, and expression, reshaping public discourse.
Semantic harm expands beyond language. Gesture recognition systems misinterpret movement patterns from individuals with disabilities. Biometric systems misread facial expressions across cultures. Sentiment models classify emotional states inaccurately for people whose communication patterns fall outside the dominant dataset. When systems misread meaning, they substitute statistical inference for comprehension.
The consequences spill into governance. A misinterpreted text message becomes evidence in a legal case. A misread sentiment score influences a welfare eligibility decision. An incorrectly flagged video triggers law enforcement escalation. Semantic harm becomes embedded in institutional pathways because the system’s misunderstanding is expressed as policy.
What makes semantic harm particularly damaging is its epistemic inertia. Systems cannot explain how they arrived at interpretive conclusions because their logic is statistical rather than semantic. Individuals cannot correct these interpretations because the system has no mechanism for integrating corrected meaning. The harm persists because the system has no capacity for narrative context.
The long-term consequence is a world in which communication must adapt to machine expectations. People self-censor, modify expression, avoid certain topics, or alter linguistic patterns to reduce risk. The computational reading of language becomes a form of linguistic governance. Semantic diversity contracts in favour of machine legibility.
Mitigating semantic harm requires shifting the system’s interpretive burden from raw behavioural inference to contextual, verifiable signals. Instead of guessing at meaning, systems must incorporate evidence of intent, context, and identity. This is not about making machines more human; it is about preventing machines from mistaking statistical patterns for semantic truth.
Harm Through Omission: The Cost of What Systems Do Not See
Digital systems make decisions based on the information they possess, not the information they lack. In human judgement, missing information triggers curiosity, caution, or inquiry. In computational judgement, missing information is often treated as a signal in itself. Absence becomes a form of presence. Missing data becomes presumed data. Omission becomes an inference pathway rather than an invitation to gather context.
This creates unique forms of harm because systems treat informational gaps as anomalies. People who fall outside standardised data pipelines—those who lack formal credit histories, long-term addresses, consistent employment records, or stable digital footprints—become difficult for systems to interpret. Rather than being classified as unknown, they are often classified as high risk. Their lack of visibility becomes evidence against them.
These harms accumulate disproportionately among populations whose lives intersect with institutional blind spots. Migrants, seasonal workers, people in informal economies, individuals re-entering society after incarceration, and those living in poverty or displacement often have fragmented or irregular data histories. Systems that cannot interpret these gaps label them unreliable or fraudulent. The harm is not caused by what the system sees but by what it fails to see.
Omission also affects context-rich domains. A system assessing eligibility for benefits may lack data on caregiving responsibilities that disrupt income patterns. A fraud-detection system may misinterpret a sudden change in behaviour without recognising a medical emergency. A biometric system may fail to recognise individuals whose features fall outside the norms of its training data. In each case, the absence of contextual data creates a false signal that disadvantages the person.
These harms become inevitable in systems that rely on inference as their primary interpretive mechanism. Omission becomes dangerous because systems cannot distinguish missing data from negative signals. Without mechanisms for individuals to present verified context, omissions become structural barriers. Harm emerges not from misinterpretation alone but from the system’s inability to recognise its own epistemic limits.
Designing for omission requires systems to treat missing data as a cue for verification rather than inference. Instead of filling gaps with assumptions, systems must request proofs that illuminate circumstances the model cannot infer. Omission should trigger dialogue, not penalty. Without this shift, the people most in need of institutional support become the most vulnerable to computational exclusion.
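One way to express this rule, sketched here with invented field names and a hypothetical decision enum, is to make "unknown" a first-class state that routes to a proof request rather than into the risk score:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Decision(Enum):
    APPROVE = auto()
    DENY = auto()
    REQUEST_PROOF = auto()  # missing context triggers dialogue, not penalty

@dataclass
class Applicant:
    income_history_months: Optional[int]  # None means unknown, not zero
    has_verified_income_claim: bool

def assess(applicant: Applicant) -> Decision:
    # Absence of data is treated as "not yet known", never as a negative signal.
    if applicant.income_history_months is None:
        return (Decision.APPROVE if applicant.has_verified_income_claim
                else Decision.REQUEST_PROOF)
    if applicant.income_history_months >= 6:
        return Decision.APPROVE
    return Decision.REQUEST_PROOF

print(assess(Applicant(income_history_months=None, has_verified_income_claim=False)))
# Decision.REQUEST_PROOF: the system asks rather than infers
```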
The Collapse of Contestability: When People Cannot Challenge the System
Contestability is the mechanism through which errors become visible and correctable. In human systems, contestability exists through appeal processes, caseworker discretion, and narrative explanation. In digital systems, contestability collapses because the logic governing decisions is opaque, the pathways for appeal are limited, and the system interprets itself as authoritative.
The collapse begins with opacity. Individuals rarely know why they were classified as high risk, denied benefits, rejected for credit, or flagged by a safety filter. The system does not explain its logic because it is not designed to generate explanations. Even institutions often cannot reverse-engineer the reasoning behind a model’s output. Without insight into the basis of a decision, individuals cannot contest it meaningfully.
Contestability also collapses because digital systems are not built to integrate narrative evidence. When a person explains the context behind an anomaly, the system has no way to incorporate that explanation. Human appeals processes may exist on paper, but they are often under-resourced or ineffective, especially when system decisions are considered final. The person is left without a pathway to correct misinterpretation.
The consequences are profound. A misclassification becomes destiny. An incorrect risk score becomes permanent. The absence of contestability traps individuals in a computational logic that they cannot understand or influence. It also undermines institutional legitimacy because fairness depends not on perfect accuracy but on the capacity for correction. Systems that cannot be contested accumulate harm silently, and the people affected lose trust not only in the system but in the institution behind it.
Rebuilding contestability requires more than adding appeal forms. It requires designing systems that expect to be wrong and provide mechanisms to update their understanding. Verification infrastructure supports contestability by allowing individuals to present authoritative evidence that contradicts model inference. Systems that can incorporate verified corrections regain the ability to recognise their own mistakes. Contestability is not a luxury; it is the foundation of accountable computation.
Why Unverified Inputs Generate Disproportionate Harm
Unverified systems rely on signals that lack grounding. They interpret patterns, proxies, and behaviours because they cannot validate claims. This reliance on inference creates harm not by malfunction but by design. When claims cannot be verified, systems fill epistemic gaps with probabilistic approximations. These approximations may work at population scale but fail at individual scale, where precision matters.
Unverified inputs generate disproportionate harm because they introduce ambiguity into systems that treat ambiguity as risk. A person without verifiable address history becomes suspicious. A person whose income cannot be validated becomes unstable. A person whose identity cannot be cryptographically confirmed becomes a potential fraud vector. The system treats unverifiability as a negative attribute because it cannot distinguish uncertainty from threat.
This dynamic creates a harmful asymmetry. Systems demand verifiability but do not provide mechanisms for individuals to meet that demand. They penalise people for lacking proofs without enabling them to obtain or present those proofs. The harm arises not from the absence of verification but from the absence of an infrastructure that makes verification accessible.
Verification changes this dynamic by shifting systems from inference to truth. When identity attributes, authorisations, and entitlements are presented as cryptographically verifiable claims, systems no longer need to guess. Uncertainty becomes manageable because claims can be confirmed directly. Harm decreases because the system’s interpretive burden decreases. The relationship between institution and individual becomes grounded in evidence rather than speculation.
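As a minimal sketch of that shift, assuming the third-party `cryptography` package and a bare Ed25519 signature standing in for a full verifiable-credentials stack (issuer discovery, revocation, and trust lists are all omitted), the system checks a signed claim instead of inferring the attribute from behaviour:

```python
# Sketch only: a real deployment would use a verifiable-credentials framework,
# issuer trust lists, and revocation checks. Requires the `cryptography` package.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer's key normally lives with the issuing institution; generated here for the demo.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

claim = json.dumps(
    {"subject": "user-123", "attribute": "address_verified", "value": True},
    sort_keys=True,
).encode()
signature = issuer_key.sign(claim)  # issued once by the authoritative party

def claim_is_verified(claim_bytes: bytes, sig: bytes) -> bool:
    try:
        issuer_public.verify(sig, claim_bytes)  # raises on any tampering
        return True
    except InvalidSignature:
        return False

print(claim_is_verified(claim, signature))              # True: no guessing required
print(claim_is_verified(claim + b"tamper", signature))  # False: an altered claim is rejected
```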
The disproportionate harm of unverifiable inputs is a design choice, not an inevitability. When systems rely on inference, they operationalize bias and misinterpretation. When they rely on verification, they operationalize truth. The distinction determines whether harm becomes systemic or contained.
Designing Systems That Contain Harm
Containment recognises that harm cannot be eliminated but can be prevented from propagating. It requires architectural safeguards that limit how far a misinterpretation can spread and how deeply it can be institutionalised. Containment transforms harmful outputs into correctable events rather than structural failures.
The first containment principle is role separation. Systems should not collect data, interpret it, and enforce decisions through a single undifferentiated logic. Separation creates checkpoints that prevent a single misinterpretation from cascading across layers. It mirrors principles in financial governance and cybersecurity, where no single actor is permitted to control all aspects of a critical process.
The second principle is verification-first identity. Systems that rely on verifiable claims reduce dependence on behavioural interpretation. Verified attributes create stable reference points that limit the interpretive space. Containment becomes easier because truth becomes portable across institutions.
The third principle is contextual override. Systems must incorporate structured mechanisms that allow individuals to supply context when their behaviour diverges from model expectations. Overrides prevent legitimate anomalies from being treated as risk signals. They also create opportunities for systems to learn when their assumptions fail.
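A sketch of what such an override hook might look like, with invented field names and statuses: structured context supplied by the affected person suspends automated enforcement and routes the case to review.

```python
from dataclasses import dataclass, field

@dataclass
class FlaggedDecision:
    decision_id: str
    reason: str
    context_notes: list[str] = field(default_factory=list)
    status: str = "auto_enforce"

def attach_context(decision: FlaggedDecision, note: str) -> None:
    """Context supplied by the affected person converts automation into review."""
    decision.context_notes.append(note)
    decision.status = "pending_human_review"  # anomaly plus context means review, not penalty

flag = FlaggedDecision("dec-8841", "income volatility outside modelled range")
attach_context(flag, "Seasonal agricultural work; income concentrated in the third quarter.")
print(flag.status)  # pending_human_review
```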
The fourth principle is auditability. Systems must produce logs that allow institutions and individuals to reconstruct decisions. Auditability enables accountability by making harm traceable. Without auditability, harm becomes diffuse and uncorrectable.
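A decision receipt is one concrete form auditability can take; the fields below are illustrative rather than any standard schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def decision_receipt(inputs: dict, model_version: str, outcome: str) -> dict:
    """Append-only record that lets a decision be reconstructed and challenged later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "outcome": outcome,
    }
    # A content hash makes silent after-the-fact edits detectable.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

receipt = decision_receipt({"applicant": "user-123", "score": 0.81}, "risk-model-2.4", "deny")
print(receipt["digest"][:16], receipt["outcome"])
```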
The fifth principle is revocability. Systems must allow individuals to update or revoke credentials, correcting outdated or erroneous information. Revocability prevents harm from becoming sticky, ensuring that systems remain aligned with evolving truth.
Containment does not require perfect systems; it requires systems that are incapable of causing catastrophic harm through routine operation. Architecture becomes the mechanism through which society limits the reach of computational misinterpretation.
The Moral Weight of Design: When Architecture Becomes Ethics
Digital architecture functions as a moral instrument. Every classification schema, threshold parameter, and optimisation target expresses a value judgement. Design decisions become institutional choices about who to trust, who to exclude, who to monitor, and who to believe. These choices shape the lived experience of millions of people. They determine who faces friction, who receives opportunity, and who encounters suspicion.
The moral weight of design arises because computational systems convert normative assumptions into enforceable rules. Designers may not view themselves as shaping justice, equity, or dignity, but their decisions carry these consequences. When a system misinterprets someone, it is not merely inaccurate—it becomes unjust. When a system imposes behavioural conformity, it is not merely strict—it becomes coercive. When a system lacks contestability, it is not merely incomplete—it becomes authoritarian.
Ethical frameworks cannot repair harm after deployment. Ethics must be operationalised through architecture. This requires system designers to recognise that they are creating institutional actors whose decisions will affect people materially. It requires institutions to treat accuracy, transparency, contextuality, and contestability as moral commitments, not optional enhancements. And it requires a governance vision that sees verification infrastructure, oversight mechanisms, and correction pathways as ethical obligations.
Architecture becomes ethical when it encodes respect for personhood. It becomes harmful when it encodes convenience at the expense of humanity. The difference lies in the assumptions baked into the system’s design and the mechanisms provided for humans to reclaim agency when systems misinterpret them.
Conclusion: Protecting the Person, Not the Pattern
When systems operationalize harm, they do so because they act on patterns rather than people. They convert guesses into governance and assumptions into institutional logic. They enforce rules without context and penalise deviation without understanding. This is not a failure of computation but a failure of design. Systems that interpret people without grounding recreate the fragility of human bias at machine scale.
Protecting the person requires shifting digital infrastructure away from inference-based identity toward verification-based identity. It requires contestability that restores agency, context mechanisms that restore meaning, auditability that restores accountability, and role separation that restores institutional balance. These are not mere enhancements; they are the foundations of humane digital governance.
Systems will continue to play an increasing role in managing eligibility, opportunity, and risk. The question is whether these systems will understand people on their own terms or reduce them to signals that fit model expectations. The future of digital society depends on whether institutions choose to align architecture with human complexity rather than computational convenience.
A world where systems operationalize harm is not inevitable. It is a design choice. And it is a choice that can be reversed—through verification, through architecture, and through a commitment to building systems that protect the person, not the pattern.



By mistake I deleted the original comment while responding to it, so I’m using the content from the email notification to restore the original comment and then my response. I apologize to @Rainbow Roxy for my tardiness.
Original comment: "Regarding the topic of the article, this was a really insightful piece. I found your point about harm becoming operational very compelling. Could you elaborate a bit more on how these invisible feedback loops in widely deployed systems can be identified or interrupted, especially when individuals can't see or contest them? It feels like a crucial challenge for responsible AI development."
My response:
Thanks for this, and you’re pointing at the hard part: in scaled systems, “feedback loops” are rarely visible at the individual level, so we can’t outsource detection to the people being affected.
The practical move is to treat these loops like an SRE problem plus a governance problem: **make them observable, then make them interruptible**. This leads to two specific approaches:
On *identifying* invisible loops: you instrument outcomes the same way you instrument latency. That means cohort-level telemetry (who gets denied, downranked, flagged), drift monitoring, and “harm leading indicators” (appeal rates, reversal rates, complaint clustering, sudden distribution shifts). Then you pressure-test causality with A/B holds, shadow deployments, counterfactual evaluation, and targeted audits on slices that are historically under-measured.
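A toy sketch of what that instrumentation can look like (cohort names, log entries, and the alert threshold are all invented), turning denial reversals on appeal into a harm leading indicator:

```python
from collections import defaultdict

# Toy decision log: (cohort, was_denied, was_reversed_on_appeal)
decisions = [
    ("gig_workers", True, True), ("gig_workers", True, True), ("gig_workers", True, False),
    ("salaried", True, False), ("salaried", False, False), ("salaried", True, False),
]

denials, reversals = defaultdict(int), defaultdict(int)
for cohort, denied, reversed_on_appeal in decisions:
    if denied:
        denials[cohort] += 1
        reversals[cohort] += reversed_on_appeal

REVERSAL_ALERT = 0.30  # invented threshold: >30% of denials overturned warrants investigation
for cohort, count in denials.items():
    rate = reversals[cohort] / count
    if rate > REVERSAL_ALERT:
        print(f"ALERT: {cohort} reversal rate {rate:.0%} suggests a systematic misread")
```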
On *interrupting* loops: you need circuit-breakers. Rate limits for automated enforcement, human review for high-impact actions, cool-downs when a metric spikes, and “do-no-amplify” constraints (don’t use model outputs to generate the next round of training labels without controls). Most importantly, you build **contestability rails**: decision receipts, a clear appeals path, time-bound SLAs, and logging that supports independent review, not just internal debugging.
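And a toy circuit breaker on the interruption side (limits and reset policy are invented): enforcement halts itself when action volume spikes and stays halted until a human resets it.

```python
import time

class EnforcementBreaker:
    """Trips when automated enforcement spikes, diverting actions to human review."""

    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps: list[float] = []
        self.tripped = False

    def allow(self) -> bool:
        now = time.monotonic()
        self.timestamps = [t for t in self.timestamps if now - t < self.window]
        if self.tripped or len(self.timestamps) >= self.max_actions:
            self.tripped = True  # stays open until a human resets it
            return False         # route the action to manual review instead
        self.timestamps.append(now)
        return True

breaker = EnforcementBreaker(max_actions=100, window_seconds=60.0)
if not breaker.allow():
    ...  # queue for human review rather than enforcing automatically
```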
I think that if a system can’t produce a defensible decision trail and a workable appeal path, it’s not “AI-powered” — it’s just **harm at scale with better branding**.
In the new year I’m going to think this over further so I can make my reasoning clearer and more specific.