The intelligence-policy disconnect describes the structural condition in which intelligence assessments are neither wrong nor politicized but simply irrelevant to a policy process that operates on criteria the assessments do not address. Where the analyst-policymaker relationship as Sherman Kent conceived it assumes a policymaker who needs the analyst’s estimate and may be corrupted by the temptation to influence it, the disconnect identifies a more fundamental problem: a policymaker who does not need the estimate at all, because the decision is being made on grounds the estimate cannot adjudicate.

Paul Pillar articulated this condition most clearly: intelligence assessments rarely drive policy decisions. Policymakers arrive at positions through political conviction, ideological commitment, institutional momentum, and domestic political calculation. Intelligence is used instrumentally — to justify decisions already made, to build public support, or to legitimate courses of action with their own political logic — but it does not determine the decisions themselves.

The canonical failure modes and the fourth configuration

The intelligence discipline’s case literature recognizes three canonical failure configurations:

  1. Wrong assessment: The intelligence community produces an estimate that proves factually incorrect (Iraq WMD 2002 — high-confidence assessment that weapons existed when they did not).
  2. Right assessment, unheeded warning: The intelligence community produces a correct assessment that policymakers fail to act on (pre-9/11 threat reporting — the system was “blinking red” but no defensive action was taken).
  3. Right assessment, wrong framework: The intelligence community’s assessment is factually correct but interpreted through a flawed analytical framework (Yom Kippur War — the Israeli kontzeptzia that Egypt would not attack without air superiority caused indicators to be read as exercises rather than war preparation).

The intelligence-policy disconnect adds a fourth:

  4. Right assessment, structural irrelevance: The intelligence community’s assessment is correct and honestly produced, but the policy decision is made on criteria the assessment does not address, rendering it moot.

The 2026 Iran war may exemplify this fourth configuration. The intelligence community assessed that Iran was not building a nuclear weapon. The policy decision to strike was made on capability criteria — Iran could build a weapon — that the intent-focused assessment did not address. The assessment was not wrong, not politicized, not ignored in the sense of the pre-9/11 warnings. It answered a question the policymaker was no longer asking.

Structural dynamics

The disconnect is not a bug in the intelligence system — it is a structural feature of the relationship between knowledge and power. Intelligence produces epistemic products (what is the case? what is likely?) while policy produces action products (what shall we do?). The mapping between them is not determined: the same assessment can support different policies, and different assessments can converge on the same policy. Kent’s framework assumed the mapping was tight — know the world correctly, and the right policy follows. Pillar demonstrated that the mapping is loose — policy has its own logic, and intelligence is one input among many, often not the decisive one.

The disconnect becomes pathological when the intelligence community does not know it has been disconnected — when analysts continue producing assessments under the assumption that they are informing decisions, while the decisions have already been made on other grounds. In this condition, the intelligence enterprise becomes a legitimation apparatus rather than an advisory function, and the analyst’s professional identity — the independent scholar-advisor — becomes a fiction maintained for institutional and psychological reasons rather than operational ones.

Implications for reform

If the intelligence-policy disconnect is structural rather than correctable, the reform literature’s prescriptions — better analysis, better communication, better integration with the policy process — address the wrong problem. The question is not how to make intelligence more useful to policymakers who want to use it, but whether and under what conditions policymakers can be expected to use it at all.

This does not mean intelligence is pointless. Operational intelligence — targeting, force protection, tactical warning — retains its value regardless of the estimative enterprise’s relevance to strategic policy. The disconnect is specifically between estimative intelligence and strategic policy decisions. The irony the 2026 case illustrates is that the same intelligence community whose strategic estimates were bypassed produced operational intelligence of extraordinary precision. The system’s two functions — understanding the world and enabling action in it — operated at entirely different levels of relevance to the decision-makers they served.