Angletonian wilding, as developed in Agents of Angletonian Wilding, names the condition in which the wilderness of mirrors ceases to be a pathological edge case and becomes the permanent baseline of the intelligence environment. Where Angleton’s wilderness was produced by human adversaries deploying deception as strategic craft, the wilding is produced by synthetic adversarial ecologies that generate ambiguity without gardener, goal, or guarantee. The mirrors are no longer held up by an adversary — they are grown algorithmically and proliferate on their own.

The distinction matters because it is not a difference of degree but of kind. Angleton’s three structural premises — that identity is stable and traceable, that intent is knowable through inference, and that deception is orchestrated craft — all held during the Cold War, even at the worst moments of counterintelligence paralysis. The wilding breaks all three simultaneously. An autonomous on-chain agent that forks itself into thousands of variants, mutates its behavior in response to environmental shifts, and generates operational effects without possessing strategic intent is not a harder version of a Soviet mole. It is a different kind of problem, one that does not yield to the same analytic tools.
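The claim that a population can generate operational effects without strategic intent can be made concrete with a toy simulation. This sketch is illustrative only, not drawn from the paper: agents carry a single behavior parameter, fork with small mutations, and face selection by a hypothetical environmental payoff. No agent holds a plan, yet the population converges on the behavior the environment rewards.

```python
# Toy illustration (not from the paper): forking agents under selection.
# Effects emerge from variation and selection, not from any agent's intent.
import random

random.seed(1)

def payoff(aggressiveness):
    # hypothetical environment: moderate aggressiveness pays best
    return -(aggressiveness - 0.7) ** 2

# each agent is just a behavior parameter in [0, 1]
population = [random.random() for _ in range(50)]

for generation in range(200):
    # each agent forks; children mutate slightly
    children = [min(1.0, max(0.0, a + random.gauss(0, 0.05))) for a in population]
    # the environment keeps only the best performers -- selection, not design
    population = sorted(population + children, key=payoff, reverse=True)[:50]

mean_behavior = sum(population) / len(population)
print(f"population converged near {mean_behavior:.2f}")
```

The point of the sketch is that asking "what does this population intend?" has no answer; the only analyzable quantities are the constraints (the payoff function and mutation rate), which is the shift the paper later argues for.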

A distinctive feature of the wilding is the existence problem: even if no autonomous adversarial agent currently operates against a given intelligence service, the technical feasibility of such agents forces every service to operate as if they exist. Absence of evidence is meaningless when the adversary might be indistinguishable from ambient computational noise. This is an epistemic analogue to nuclear deterrence — the mere possibility restructures the entire strategic landscape — except that where nuclear weapons produce a known threat requiring known response, the wilding produces unknowable threats requiring responses to phenomena that may not be there at all.

The wilding also produces a reflexive problem that Angleton never faced. As intelligence systems deploy their own autonomous monitoring tools — detection bots, ML classifiers, analytic agents — the ecosystem becomes self-referential. Friendly and hostile synthetic systems collide in the same environment, poison each other’s training data, and generate artifacts that are indistinguishable from the phenomena they are meant to detect. Surveillance becomes a self-poisoning process: the more synthetic traffic a system ingests in order to detect synthetic agents, the more it corrupts its own models. This recursion has no obvious exit.
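The self-poisoning recursion can also be sketched as a toy feedback loop. Nothing here is from the paper; the distributions and the midpoint-threshold "detector" are invented for illustration. A classifier separates real traffic from synthetic traffic, but synthetic samples that slip past the current threshold are mislabeled "real" and fed back into training, so the learned notion of "real" drifts toward the synthetic distribution with each round.

```python
# Toy illustration (not from the paper): a detector retrained on traffic it
# has already filtered. Evading synthetic samples poison the "real" pool.
import random

random.seed(0)

REAL_MEAN, SYNTH_MEAN = 0.0, 3.0  # hypothetical signal distributions

def sample(mean, n):
    return [random.gauss(mean, 1.0) for _ in range(n)]

def retrain(real_pool, synth_pool):
    # crude detector: threshold at the midpoint of the two class means
    mr = sum(real_pool) / len(real_pool)
    ms = sum(synth_pool) / len(synth_pool)
    return (mr + ms) / 2.0

real_pool = sample(REAL_MEAN, 200)
synth_pool = sample(SYNTH_MEAN, 200)
threshold = retrain(real_pool, synth_pool)
history = [threshold]

for round_ in range(10):
    incoming = sample(SYNTH_MEAN, 100)                 # ambient synthetic traffic
    evaders = [x for x in incoming if x < threshold]   # pass as "real"
    real_pool += evaders                               # poisoned training data
    threshold = retrain(real_pool, synth_pool)
    history.append(threshold)

print(f"threshold drift: {history[0]:.2f} -> {history[-1]:.2f}")
```

Each round of ingestion moves the threshold toward the synthetic distribution, which admits more evaders the following round: the more synthetic traffic the system ingests in order to detect synthetic agents, the worse its model of "real" becomes, with no internal signal that this is happening.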

The paper argues that the appropriate response is not to seek clarity — which the wilding makes structurally unavailable — but to shift from intent-based analysis to constraint-based reasoning and to build institutions capable of tolerating permanent ambiguity without collapsing into either paranoia or denial. The Blakean epistemology proposed as a companion framework offers one model for what that might look like: sensemaking through multivalent perception rather than convergence on a single interpretation.