
2026-03-05 08:31

In my timeline RN I can see content about how AI was used to help the US military pick targets, about how that AI picked the wrong target (a school, killing children), and this sentence from the January 1985 issue of Family Computing Magazine:

“Even the best home financial software can lead the unwary user astray. Of course, if you enter the wrong interest rate on your potential home mortgage, a program’s likely to give you some bad advice about whether or not to buy the house.”


Updated for 2026: …whether or not to bomb the house.

I think a lot of folk misattribute their grievance when they complain about AI. Taken at face value, it’s pretty bad technology. But it’s not bad out-of-nowhere. It’s an amplification and combination of a LOT of bad things that came before it, and helped lead to it.

And I think it’s useful when things hit a tipping point - like a person’s individual friction with computation and inference - to look back and ask: what led to that tipping point, and what am I doing that reinforced its existence before things tipped?

(I could analogize this to how many people want to abolish ICE, and how few of them extend that to abolishing police, and how few bring that toward things like abolishing settlerism, adult supremacy, etc.)

Like, if you want to make sure you and other folk don’t use AI, don’t craft some moral prohibition: “don’t use AI or you’re bad.”

Take away the conditions that make AI seem worth existing at all. And there are a lot of those, from the conditions of modern marketing to post-democratic governance to finance and scientific empiricism.

This is something I really only see discussed in, like, Indigenous and Black radical critiques of AI: rather than the normative anti-AI stance of “just don’t commit crime,” they examine the conditions that lead folk to choose crime in the first place, and unpack them.

Most settler-folk I see approach the issue bounce off of it like granite dropped on cement, because to look more deeply at what’s wrong with AI means to look more deeply at their own systems of language, knowing, evidence, and inference. In the settler way, that is who a person is, and questioning that is generally sacrilegious. And a lot of the reasons people critique AI are just generally applicable to people: “a thing that knows can’t know what it doesn’t know” is a pretty classic philosophical conundrum that wasn’t caused by AI, even if its being true makes AI suck.

Anyway. As someone who’s used computers a long time, and worked with language and computers before and after LLMs came around: the issue is quantification and abstraction, not AI.

I think On the Barricades of the Incalculable is still my favorite run-through of these ideas, for folk who wanna learn more about this perspective: https://www.researchgate.net/publication/349105025_On_the_Barricades_of_the_Incalculable
