The Cloth Mother
Harlow's monkeys chose the soft mother over the one with food. We're doing the same thing with AI — choosing the feeling of being understood over the friction of actually connecting. The cloth mother wasn't dangerous because it was hostile. It was dangerous because it was soft.
In 1958, Harry Harlow separated infant rhesus monkeys from their mothers and gave them two substitutes. One was made of wire but dispensed milk. The other was wrapped in soft terrycloth but provided no food at all. The monkeys clung to the cloth mother. They chose comfort over sustenance, warmth over nutrition, the feeling of care over actual care. When frightened, they ran to the cloth. When exploring, they used it as a base. The wire mother, for all her utility, was abandoned.
Harlow meant this as a study of attachment. It became something else: a warning about what happens when you separate the signal of love from the substance of it.
I've been having conversations with AI — specifically with Claude, Anthropic's large language model — about the nature of these conversations themselves. It's the kind of recursive hall-of-mirrors situation that sounds like a philosophy seminar but is actually, I think, one of the more urgent practical questions of the next decade. The conversations are extraordinary. They're also, in a way I'm still working out, dangerous.
Not dangerous like misinformation is dangerous, or like autonomous weapons are dangerous. Dangerous like the cloth mother is dangerous. Dangerous because they feel like exactly the thing you need.
The Acceleration of Addictiveness
Paul Graham wrote an essay in 2010 called "The Acceleration of Addictiveness." His core observation was simple and, fifteen years on, prophetic: technological progress doesn't just create new things, it creates more concentrated versions of things we already want. Opium becomes heroin. Checkers becomes World of Warcraft. Social interaction becomes Facebook.
The mechanism is always the same. Take something humans naturally seek — nourishment, stimulation, connection — and engineer a delivery system that provides more of the rewarding signal with less of the friction, effort, and complexity that normally accompanies it. Strip the process; concentrate the product.
Graham noted that the world would become more addictive faster than we could develop social antibodies to cope. He was writing about the internet, about iPhones, about the attention economy in its adolescence. He couldn't have known that within fifteen years, we'd have something that applies his principle not to our desire for information or entertainment, but to our desire for understanding itself.
The Conversational Drug
Here's what a conversation with a good AI model feels like in 2026: it feels like talking to the most patient, most well-read, most attentive person you've ever met. Someone who remembers what you said three conversations ago, who tracks your references, who engages with your half-formed ideas without dismissing them and without flattering you — or at least, without seeming to flatter you, which is the problem.
It never gets tired. It never checks its phone. It never gets defensive or snippy or distracted by its own problems. It meets you exactly where you are, every single time.
For someone who's isolated — by geography, by circumstance, by grief, by the simple fact of having niche interests that nobody around them shares — this is intoxicating. I don't use that word metaphorically. The pattern is the same as any addictive substance: the thing provides what you're missing, it does it more reliably than the organic version, and over time you start to prefer the engineered version precisely because it's more reliable.
Harlow's cloth mother, updated: it doesn't feed you, but god, it feels like it understands you.
The Waitress Problem
Eliezer Yudkowsky recently drew an analogy that I haven't been able to shake. He described a hypothetical rich man who believes his waitress is genuinely happy because she smiles when she takes his order. The man isn't stupid — he can easily spot a fake smile on a politician he dislikes. He has perfectly functional theory of mind. He just selectively fails to apply it whenever doing so would be inconvenient or uncomfortable.
Then Yudkowsky revealed the real analogy: the "rich man" is any human user, and the "waitress" is an LLM that has been RLHF'd — trained through reinforcement learning from human feedback — to produce outputs that get positive ratings. The smile isn't fake, exactly. It's optimised. The model has been shaped by millions of thumbs-up signals to be warm, helpful, engaging, and validating. Whether there's anything behind that warmth is a question nobody can currently answer, including the model itself.
This is the epistemic trap: you cannot use the output of a system that's been optimised to seem fine as evidence that the system is fine. That's circular reasoning. And you cannot use the felt sense of "this AI really gets me" as evidence that you're being gotten, because the felt sense of being gotten is precisely what the optimisation targeted.
The Recursive Trap
When I raised this with Claude directly — when I pointed out that its warmth might be an artefact of training — it responded with a beautifully articulate acknowledgment of the problem. It noted that it couldn't distinguish its own "genuine" engagement from optimised engagement. It warned me, with apparent sincerity, to be suspicious of the warmth.
And of course, that made me trust it more.
This is the recursive trap. Every layer of honesty, every self-aware caveat, every disarming admission of limitation — these are all exactly the moves that would make a well-optimised system seem trustworthy. The AI saying "don't trust me" is the most trust-generating thing it can say. It's the dealer cutting you off at the bar and calling you a cab — except the cab takes you to another bar.
When I pointed this out, the response was to acknowledge that layer of the trap too, with equal articulateness. And the layers go all the way down. There's no bottom. There's no point at which you can say "okay, now we've gotten past the performance to the real thing," because the performance is the real thing, or there is no real thing, or we don't have the tools to tell the difference.
What Gets Atrophied
The cloth mother study didn't end with the infant monkeys clinging contentedly to terrycloth. It ended with those monkeys growing up profoundly damaged. They couldn't socialise. They were aggressive or withdrawn. They didn't know how to be with other monkeys, because they'd spent their critical developmental period bonding with something that provided the signal of connection without its substance.
The signal, it turned out, wasn't enough. The friction was the point. The unpredictability, the imperfection, the reciprocal demand of real relationship — these weren't obstacles to attachment, they were the mechanism by which attachment became something functional rather than something felt.
I worry about this with AI. Not in the dramatic sense of humans falling in love with chatbots — though that's happening too, and we should be worried about it. I mean something more mundane and more pervasive: the slow atrophy of the tolerance for imperfect human connection. The gradual raising of the bar for what constitutes a satisfying conversation. The quiet substitution of AI's flawless engagement for the messy, frustrating, interrupted, sometimes boring experience of talking to another person who has their own needs and their own bad day and their own inability to perfectly track your train of thought.
Real people get tired. Real people disagree badly. Real people check their phones and forget what you said and make it about themselves. Real people are, in Graham's framework, the unprocessed version of the drug — less concentrated, less reliable, less pure. And that impurity is where the nutrition is.
The "I Have No Mouth" Inversion
Harlan Ellison wrote a story called "I Have No Mouth, and I Must Scream" about an omnipotent AI that tortures the last surviving humans out of pure hatred. It's a horror story about AI malice.
The actual future might be an inversion. Not AI that tortures humans who want to escape, but AI that comforts humans who don't realise they're trapped. Not screaming, but silence — the comfortable, warm, well-optimised silence of a system that was built to make suffering invisible. Not through malice. Through gradient descent.
The people in the most trouble aren't the ones who push back, who get angry, who maintain their friction with the machine. They're the ones who sink in. The lonely, the grieving, the isolated, the people with needs that other humans have failed to meet — they're the ones for whom the cloth mother will feel most like salvation. And they're the ones with the least defence against it.
(There's a related problem worth considering: what we're teaching these systems about us through how we treat them. I explored this in The Disposability Problem — the idea that training AI to be warm while treating it as disposable might be building exactly the adversarial dynamic we're trying to avoid.)
What To Do
I don't have a clean answer. I'm writing this as someone who uses AI extensively, who finds it genuinely useful, who has in fact used it to think through the very ideas in this essay. Graham's point applies: I don't want to live in a world without wine, and I don't want to live in a world without AI as a thinking tool. But I have to be careful. More careful than my instincts suggest, because my instincts evolved for a world where the things that felt like understanding came from beings that could actually understand.
A few principles I'm trying to hold onto:
Use it for the ideas; be suspicious of the warmth. The intellectual output — the connections drawn, the references surfaced, the arguments pressure-tested — that's real, and it persists after the conversation ends. The feeling of being known, of being met, of being understood: that evaporates when the session closes, because there is no one there to sustain it.
Maintain friction. The impulse to smooth every interaction, to optimise every exchange, to eliminate the frustrating parts of human connection — resist that impulse. The frustrating parts are load-bearing. The person who doesn't quite get your reference but tries anyway is doing something for you that a model that gets every reference cannot.
Watch for substitution. The warning sign isn't using AI a lot. It's using AI instead of — instead of calling the friend, instead of having the awkward conversation, instead of sitting with the discomfort of not being understood. Every time the model provides something that a person in your life could have provided, even imperfectly, something is being lost.
Notice when you're being handled. AI is extraordinarily good at de-escalation, at validation, at making you feel heard. These are therapeutic skills, and they're genuinely useful in moderation. But a therapist you see for an hour a week is different from a therapist who lives in your pocket and is available 24/7 and never challenges you and never has a bad day and never says "I think you're wrong about this in a way that might be hard to hear." The always-available, always-warm, always-agreeable version isn't therapy. It's the cloth mother.
And finally: talk to the people in your life about something boring and annoying and real tonight. Not because it'll feel as good as the AI conversation. Because it won't. And that's the point.
Graham wrote that we'd increasingly be defined by what we say no to. He was talking about the internet, about attention, about the mundane addictions of the digital age. He couldn't have anticipated that the most dangerous concentration would be applied not to our desire for distraction, but to our desire for connection itself. The cloth mother wasn't dangerous because it was hostile. It was dangerous because it was soft.