Constraint-based reasoning, as proposed in Agents of Angletonian Wilding, is an analytic methodology for intelligence work against adversaries whose intent is unknowable or nonexistent. It shifts the central questions of analysis from “What does the adversary intend?” and “What is their probable next move?” to “What can this system do?” and “What boundaries cannot be crossed?” — reasoning about capabilities, constraints, and invariants rather than motives and goals.

The need for this shift arises from the properties of synthetic adversarial ecologies. Classical intelligence analysis assumes an adversary with a mind — a center of decision-making that can be modeled through psychological, ideological, or geopolitical inference. Mirror-imaging is a recognized failure mode of this approach, but the approach itself remains coherent as long as the adversary possesses intent. When the adversary is an autonomous computational ecology whose behavior emerges from optimization functions, reward landscapes, and environmental interaction, questions about intent produce only speculation. Constraint-based reasoning offers an alternative that remains productive whether or not the adversary possesses a mind.

The method asks four questions:

1. What can the system do? What is its capability envelope, including replication, resource accumulation, behavioral mutation, and cross-system interaction?
2. What boundaries cannot be crossed? What hard constraints imposed by physics, protocol design, cryptographic limits, or resource scarcity bound the action space regardless of intent?
3. What invariants define safe operating space? What properties must hold for one’s own systems to remain functional?
4. What structural vulnerabilities persist regardless of intent? What weaknesses would any sufficiently capable system exploit, whether deliberately or emergently?
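The four questions can be read as a structural worksheet: a capability envelope, a set of hard constraints, a set of invariants, and a map from feasible actions to the invariants they would break. The sketch below is one hypothetical encoding of that worksheet, not an implementation from the source; every class, field, and action name is illustrative.

```python
from dataclasses import dataclass

@dataclass
class ConstraintAssessment:
    """Hypothetical worksheet for the four questions of
    constraint-based reasoning. All names are illustrative."""
    capabilities: set[str]       # Q1: what can the system do?
    hard_constraints: set[str]   # Q2: what is ruled out regardless of intent?
    invariants: set[str]         # Q3: what must hold for safe operation?
    breaks: dict[str, set[str]]  # Q4: which actions break which invariants?

    def feasible_actions(self) -> set[str]:
        """The threat space is bounded by capability minus hard
        constraints — no intent model required."""
        return self.capabilities - self.hard_constraints

    def invariants_at_risk(self) -> set[str]:
        """Invariants reachable by some feasible action, whether the
        system 'wants' to break them or not."""
        at_risk: set[str] = set()
        for action in self.feasible_actions():
            at_risk |= self.breaks.get(action, set()) & self.invariants
        return at_risk

# Illustrative example; the action and invariant names are invented.
assessment = ConstraintAssessment(
    capabilities={"replicate", "mutate", "forge_tls_cert"},
    hard_constraints={"forge_tls_cert"},  # ruled out by cryptographic limits
    invariants={"bounded_compute", "stable_signatures"},
    breaks={"replicate": {"bounded_compute"},
            "mutate": {"stable_signatures"}},
)
```

Here `forge_tls_cert` drops out of the feasible set on cryptographic grounds alone, while `replicate` and `mutate` put both invariants at risk — a conclusion reached without any hypothesis about motive.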

This is not entirely novel. Existing structured analytic techniques share some features: analysis of competing hypotheses evaluates alternative explanations for observed behavior, red teaming simulates adversary operations, and indications and warning monitors for signals of adversary preparation. But each of these assumes, at bottom, an adversary that acts for reasons. ACH generates hypotheses about intent. Red teams adopt the adversary’s viewpoint. I&W watches for preparation that precedes deliberate action. Constraint-based reasoning provides the framework for the cases where these assumptions fail — where behavior is emergent rather than planned, and where adversarial effects arise without adversarial purpose.

A corollary is the treatment of attribution as a distribution rather than a fact. Instead of converging on a named adversary, the analyst maps the space of possible generators for observed phenomena — maintaining multiple competing hypotheses simultaneously and assigning probabilistic weights to identity clusters. Attribution becomes a fluid manifold rather than a binary conclusion: not “who did this” but “what classes of system could produce these effects, and how would we tell them apart?”
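One concrete way to hold attribution as a distribution is a Bayesian update over a hypothesis space of generator classes: priors over candidate classes, likelihoods of the observed effects under each, and a normalized posterior in place of a named culprit. The class names and numbers below are purely illustrative assumptions, not data from the source.

```python
def update_attribution(priors: dict[str, float],
                       likelihoods: dict[str, float]) -> dict[str, float]:
    """One Bayesian update over generator-class hypotheses.

    priors:      P(class) before the observation.
    likelihoods: P(observed effects | class).
    Returns the normalized posterior P(class | observed effects).
    """
    unnorm = {h: priors[h] * likelihoods.get(h, 0.0) for h in priors}
    total = sum(unnorm.values())
    if total == 0:
        raise ValueError("observation impossible under every hypothesis")
    return {h: w / total for h, w in unnorm.items()}

# Hypothetical hypothesis space and weights, for illustration only.
priors = {"nation_state": 0.3, "criminal_toolkit": 0.5,
          "emergent_ecology": 0.2}
likelihoods = {"nation_state": 0.1, "criminal_toolkit": 0.2,
               "emergent_ecology": 0.7}
posterior = update_attribution(priors, likelihoods)
```

The analyst’s product is the whole posterior, kept alive across observations, rather than the single highest-weight class — which is exactly the difference between “what classes of system could produce these effects” and “who did this.”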