Pearl Harbor: Warning and Decision (1962) by Roberta Wohlstetter is the discipline’s foundational study of intelligence failure. Wohlstetter demonstrated that the attack on Pearl Harbor was not a failure of collection — the indicators of Japanese military preparations were available across multiple intelligence channels — but a failure of analysis: the signals of the impending attack were lost in a “noise” of competing signals, false alarms, and plausible alternative interpretations.

Core argument

Wohlstetter’s central insight is that the signal-to-noise problem is structural rather than correctable. In hindsight, the signals pointing to the Pearl Harbor attack are obvious; in real time, they were embedded in a mass of ambiguous, contradictory, and misleading information. The analyst cannot simply “pay more attention”: the problem is not inattention but the impossibility of distinguishing signal from noise before the predicted event has occurred. Every genuine signal is accompanied by noise that, at the moment of analysis, is indistinguishable from it.
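The structural nature of the problem can be illustrated with a toy Bayesian sketch (not from Wohlstetter; all numbers are hypothetical). When the same indicator appears routinely during exercises and false alarms, even a genuine warning barely moves the posterior probability of attack, no matter how carefully it is read:

```python
# Toy illustration of the signal-to-noise problem (hypothetical numbers).
# An indicator, e.g. an intercepted fleet-movement report, is observed.
# It is likely under BOTH hypotheses, because such reports also appear
# during exercises and routine alerts. That overlap, not inattention,
# is what limits the inference.
p_indicator_given_attack = 0.9     # signal: indicator likely if attack imminent
p_indicator_given_no_attack = 0.6  # noise: indicator also common otherwise
p_attack_prior = 0.01              # surprise attacks are rare a priori

# Bayes' rule: posterior probability of attack given the indicator.
numerator = p_indicator_given_attack * p_attack_prior
denominator = numerator + p_indicator_given_no_attack * (1 - p_attack_prior)
posterior = numerator / denominator

print(round(posterior, 3))  # → 0.015: the warning barely moves the needle
```

The point of the sketch is that improving the analyst’s diligence changes nothing here; only reducing the overlap between the two likelihoods would, and that overlap is a property of the environment, not of the analysis.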

The implication is that intelligence failure is an inherent possibility of the intelligence enterprise, not a correctable deficiency. Better collection, better analysis, and better organization can reduce the probability of failure but cannot eliminate it. This finding — uncomfortable for intelligence organizations that must promise their governments reliable warning — grounds the discipline’s structural self-understanding.

Influence

Wohlstetter’s framework became the discipline’s default explanation for surprise: the signal-to-noise model recurs in virtually every subsequent analysis of intelligence failure, from the Yom Kippur War (1973) through 9/11 (2001). The framework’s influence extends beyond intelligence to risk analysis, organizational theory, and information science.

Limitations

The signal-to-noise model treats the analytical challenge as fundamentally informational: the problem is sorting signals from noise. The 2026 Iran war analysis suggests a deeper problem. The intelligence system’s categories may not encode the properties of the adversary that determine outcomes, in which case no amount of signal-noise discrimination will produce the correct assessment, because the relevant information is not in the system’s signal space at all. This is the legibility critique: the issue is not misreading signals but possessing categories that cannot encode what matters.