A lot of contemporary criticism of AI leans on “entropy” as if it were a physical proof that statistical systems must inevitably decay into falsehood. But that reading of entropy appears nowhere in thermodynamics, information theory, or machine learning. It is a cultural inheritance that predates all three.

The second law of thermodynamics is a narrowly scoped claim: in an isolated system, entropy does not decrease. It says nothing about systems becoming worse, less meaningful, or less truthful. The move that turns entropy into decay, degeneration, or moral failure enters through nineteenth-century rhetoric, especially in Victorian Britain. When figures like William Thomson (Lord Kelvin) popularized the “heat death of the universe,” the mathematics concerned gradients and available work; the language framed this as the universe “running down” toward futility. That framing drew directly on Christian eschatology: an ordered creation falling toward an exhausted end state. Entropy became a story about destiny, not just a state function.
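For reference, the law's standard textbook statement is just an inequality, with no evaluative content at all:

```latex
% Second law (Clausius form): for an isolated system,
% entropy is non-decreasing over time.
\Delta S \ge 0
% Equivalently, for a closed system exchanging heat \delta Q
% with surroundings at temperature T (Clausius inequality):
dS \ge \frac{\delta Q}{T}
```

Nothing in either expression ranks states as better or worse; “decay” is an interpretation layered on top.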

Once that rhetorical layer was in place, it spread easily. Entropy slid from physics into culture, epistemology, and morality. Disorder was equated with confusion, confusion with unreliability, unreliability with falsehood. By the early twentieth century, writers like Henry Adams were explicitly using entropy to argue that modern knowledge itself was fragmenting and losing truth. Physics had become an authority anchor for pessimism.

This is precisely the inheritance that shows up in AI discourse today. When someone says “because thermodynamics” to argue that models must lie or collapse, they are not making a technical claim about learning systems. They are invoking that older moralized entropy narrative. Statistics gets mapped to noise, noise to heat, heat to entropy, entropy to decay, decay to untruth. The conclusion — AI is inherently untrustworthy — arrives before the analysis; thermodynamics just supplies the costume.

Claude Shannon saw this problem coming. When he introduced information entropy in 1948, he stated up front that the “semantic aspects of communication are irrelevant to the engineering problem”: his measure quantified uncertainty in symbol distributions, not truth, meaning, or correctness. That warning is routinely ignored in popular discussion, because the cultural meaning of entropy had already solidified. Entropy “means” decay, so people keep using it to smuggle in normative conclusions it does not support.
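A minimal sketch makes Shannon's point concrete. Entropy is a function of the symbol distribution alone, so two statements built from the same characters, one true and one false, are indistinguishable to it (the example sentences are my own illustration, not Shannon's):

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Shannon entropy (bits per symbol) of the character
    distribution of `text`: H = -sum(p * log2(p))."""
    counts = Counter(text)
    n = len(text)
    # Sort counts so summation order is deterministic across inputs
    # with identical distributions.
    return -sum(c / n * log2(c / n) for c in sorted(counts.values()))

true_claim = "the earth orbits the sun"
false_claim = "the sun orbits the earth"  # same words rearranged; false

# The two strings are anagrams, so their symbol distributions are
# identical, and so is their entropy. Truth value plays no role.
assert shannon_entropy(true_claim) == shannon_entropy(false_claim)
```

The measure cannot tell the claims apart because it was never designed to: it sees frequencies, not meanings.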

What makes this especially ironic is that the critique itself often does exactly what it accuses AI of doing. It performs pattern completion over a familiar cultural dataset. Given the prompt “new powerful technical system,” the continuation “inevitable downfall justified by physics” is an extremely high-probability completion in Western intellectual history. Fire, heat, hubris, entropy, destiny. The argument feels right because it fits the training data of inherited stories, not because it has been derived from the system under discussion.

What gives this pattern its rhetorical force is that it performs an illusion of explanatory depth at exactly the moment when systems of meaning-making feel unstable. Invoking physics, inevitability, and deep laws of nature creates the sensation that something complex has been explained, when in fact a familiar narrative has merely been completed. The power of such explanations is not their correctness but their closure: they end inquiry. In a cultural environment already saturated with opaque technical systems, explanations that feel deep, final, and impersonal are compelling even when they are structurally empty. That same mechanism — producing confidence without grounding — is also a legitimate reason people distrust AI systems.

And it is precisely why modern systems are engineered the way they are. They are constrained, corrected, externally anchored, and evaluated against specific tasks. When they fail, they fail for specific reasons: misaligned objectives, gaps in data coverage, insufficient supervision. None of this follows from the second law. It is just complex software behaving like complex software.

So when you hear entropy used as a punchline in AI critique, it is worth noticing the symmetry. The charge is “you are just reproducing statistical regularities from your data,” but the critique itself is reproducing statistical regularities from a much older dataset: a cultural miasma of stories about heat, decay, and fate. That does not make the concern illegitimate, but it does mean the physics is doing rhetorical work, not analytical work. And confusing the two is exactly the mistake Shannon warned us about — and exactly what people find so unsettling about AI in the first place.