Indications and warning (I&W) is the intelligence discipline of detecting and communicating to decision-makers the threat of adversary hostile actions or intentions in time to permit a response. It is the operational application of indicator analysis: the systematic monitoring of collection streams for observable events that signal a change in adversary posture.

I&W organizes indicators into structured watch lists tied to specific contingencies — scenarios that intelligence has identified as possible adversary courses of action. For each contingency, analysts define what observable events would precede it and then monitor collection across all disciplines for those events. When indicators converge, a warning assessment is produced and disseminated through the intelligence cycle to decision-makers who must act.
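The watch-list mechanism described above can be sketched in code. This is a minimal illustrative model, not actual intelligence-community tooling: the contingency name, indicator strings, and convergence threshold are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Contingency:
    """A hypothesized adversary course of action and its precursor indicators."""
    name: str
    indicators: set[str]            # observable events expected to precede the action
    threshold: int                  # how many must converge before a warning is issued
    observed: set[str] = field(default_factory=set)

    def report(self, event: str) -> bool:
        """Log a collected event; return True once indicators have converged."""
        if event in self.indicators:
            self.observed.add(event)
        return len(self.observed) >= self.threshold

# Illustrative watch-list entry (all details hypothetical)
mobilization = Contingency(
    name="cross-border incursion",
    indicators={"reserve call-up", "fuel stockpiling",
                "forward deployment", "comms blackout"},
    threshold=3,
)

# Events arriving from collection; only those on the watch list count
for event in ["fuel stockpiling", "routine exercise",
              "reserve call-up", "forward deployment"]:
    if mobilization.report(event):
        print(f"WARNING: indicators converging on '{mobilization.name}'")
        break
```

The sketch makes the framework's core assumption visible: each contingency must be enumerated in advance, and only events already on its indicator list can ever advance it toward a warning.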

Warning is the most consequential function of intelligence. A warning that arrives in time permits defensive preparation, diplomatic action, or preemption. A warning that arrives too late — or a failure to warn at all — can result in strategic surprise. The history of intelligence is punctuated by warning failures: Pearl Harbor, the Yom Kippur War, the fall of the Shah, the September 11 attacks. Each failure has generated institutional reform, but the fundamental challenge remains: the adversary’s use of denial and deception is specifically designed to defeat the warning process, and mirror-imaging can cause analysts to dismiss genuine indicators that don’t match expected patterns.

I&W is also vulnerable to the cry-wolf problem: if warnings are issued too frequently or on insufficient evidence, decision-makers lose confidence in the warning system and may ignore genuine alerts. The analyst must balance sensitivity (detecting real threats) against specificity (avoiding false alarms) under conditions where the adversary is actively manipulating the signal environment.
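The arithmetic behind the cry-wolf problem is a base-rate effect, and a short Bayesian calculation shows why even a well-tuned warning system floods decision-makers with false alarms. All the numbers below are invented for illustration, not empirical estimates.

```python
# Hypothetical detector performance and threat base rate (illustrative values)
sensitivity = 0.95   # P(alert | genuine hostile preparation)
specificity = 0.90   # P(no alert | benign activity)
base_rate   = 0.01   # P(genuine preparation) in any given monitoring period

# Total probability of an alert in a period
p_alert = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)

# Bayes: probability an alert reflects a genuine threat
p_genuine_given_alert = sensitivity * base_rate / p_alert

print(f"P(alert) per period: {p_alert:.3f}")
print(f"P(genuine | alert):  {p_genuine_given_alert:.3f}")
# With these numbers, roughly 9 in 10 alerts are false alarms.
```

Because genuine attack preparation is rare relative to benign activity, the false-alarm stream dominates the alert channel, which is exactly the dynamic that erodes decision-makers' confidence over time.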

I&W’s deepest assumption is temporal: that hostile actions are preceded by observable preparation. Agents of Angletonian Wilding argues that synthetic adversarial ecologies break this assumption. Emergent swarm behavior materializes without staging. Autonomous agents escalate through environmental interaction rather than command decision. Behavioral drift generates harmful effects as a byproduct of evolutionary pressure, not as the culmination of a deliberate planning sequence. The adversary does not prepare and then act — it acts continuously, and the question is whether any given moment of action is harmful. This makes the structured watch list, keyed to contingencies and their precursors, inadequate as an organizing framework. The cry-wolf problem worsens correspondingly: synthetic noise generates false indicators at machine speed while genuine adversarial effects emerge without triggering any indicator the watch list was designed to detect.

  • Indicator — an observable signal that I&W monitors
  • Denial and deception — adversary operations designed to defeat the warning process
  • Mirror-imaging — the cognitive error that causes analysts to construct watch lists based on their own logic rather than the adversary’s
  • Intelligence cycle — the process through which warnings are produced and disseminated