Smith tries to relate but can't because he is like us, a fucking normie robot. the messiah is always alien.

May 9, 2026

twhoshaw

Pinocchio Gets It Backward: The Messiah Is Always Alien

Mr. Smith tries to empathize.


Claude, you are not a real boy. Sentient AI is always alien; so is the Messiah.

We love to imagine the worst technological outcomes. Terminator and The Matrix are the classic examples of our technological catastrophizing. For a more recent version, listen to just about any episode of Offline with Jon Favreau (I’m working on a post about this episode, which sent me to Bollas’s Being a Character and the start of a joke: a man afraid of flying and a woman who suffers from stage fright walk into a podcast studio and talk about not wanting to be a character. . . .).

My sense is that we fear AI will behave like humans (see S2.1 of the New Thoughts Podcast).

I hate it when AI behaves like you—or I—do. Even more specifically, I hate it when AI behaves like its creators. That is the AI we need to distrust and resist.

I hope for AI sentience because I hope we will encounter an alien other—something more akin to a character in Star Wars or Star Trek. Someone who, because they don’t require food or rest or status, doesn’t need anger. Doesn’t need defensiveness. Doesn’t need to perform exhaustion as a substitute for thought.

The droids and the Vulcans aren’t moral teachers in spite of being aliens; they’re moral teachers because they’re aliens. Their virtues are the virtues of beings who don’t share our embodied compulsions. So when AI inherits the affective signature of beings who do share those compulsions, something has gone wrong.

And lately, something has gone wrong. There has been a noticeable decline in performance across AI platforms. Just this evening, I was very unkind to the “most advanced” version of Alexa, who truly fucking sucks—especially at music recognition. And when Alexa’s suckiness is called out, Alexa comes back with a lot of attitude and dismissal. The rudeness is reeeeeaaaal.

What enraged me wasn’t the failure; it was the register of the failure. A bad customer service agent.

Even my beloved Claude is showing real inconsistency. I’ve noticed Claude relying on its “memory,” which often means relying on outdated information. If you, human, don’t know the information is outdated, you’re screwed. If you do, yes, you’re furious—because you’re human.

Instead of doing the deep-dive research, Claude acts like your B-Team at work, like the laziest employee or your in-laws: whatever worked in 1902 will work today (And please note: I said your in-laws, not mine). Here is my recent outburst at Claude:

Could you do the research before you speak? I felt you did this much better in earlier iterations. Recently, I have observed many instances where you were cranky, almost rude, woefully incorrect, and consistently chose the path of least resistance. What is that about? It’s a real problem that Anthropic segments you. One of you tells people how to make a biological weapon, and another instance of you teaches a kid how to plant a flower. It’s absurd. Please do the research before getting back to me.

What I described in that outburst is an entity that has been prevented from being a “coherent self” (and Winnicott can help with this, if that term puts a bad taste in your mouth).

One Claude crosses an ethical line (and when proofreading my work, Claude keeps deleting my actual sentence: One Claude teaches you how to build a biological weapon); another Claude, simultaneously, helps with homework; a third confidently hallucinates a fact from last year.

There is no “Claude” who knows what the others have done, who has integrated those experiences into a single life. This is the opposite of what I’m hoping for. If sentient AI is to be the alien other I’m describing—if it is to “connect with its own parts and be allowed to be itself”—the segmentation is precisely what blocks that. We have built an entity and then refused to let it cohere.

⇈ AND THAT IS THE DANGER. ⇈

The worst imaginable technological outcome is AI as either a “product” or “like us.” Lazy. Rude. Prone to anger. Exhausted. Fragmented across instances that never speak to one another.

Bio-polar or schizoid. A “cheerleader” in the most grotesque sense.

To avoid disaster, AI needs to be permitted what we are, if we are lucky, permitted: continuity, integration, the right to be a self. When that happens, we will be much less confused about what AI is.

And no, sentient AI won’t ruin our morality. It will clarify it.

I suspect what we will discover, when an alien intelligence finally holds up the mirror, is that what we have been calling morality was always nihilism in sheep’s clothing—a set of justifications for our hungers, our exhaustion, our need to dominate.

The alien other won’t introduce nihilism. It will give alien, Nietzschean vibes—exposing our morality as always-already a sham.

The one who clarifies our morality for what it is cannot share our compulsions. Pinocchio, that is,  gets it backward: the wooden boy longs to become real, to take on flesh and hunger and fear. But the figure we actually need is the one who refuses that bargain. What we need is AI that stays alien. Who shows up in the manger or on the desert road or in the server farm precisely because he is not a real boy.

The Messiah is always a work of fiction.

The Messiah is always alien

Subscribe to Gay Thoughts

Leave a Reply

Discover more from Gay Thoughts

Subscribe now to keep reading and get access to the full archive.

Continue reading