Picking Up the Stone

I tried to justify a position on how to engage with AI. The examination led somewhere I didn't expect. This is a record of that investigation.


The Stone

Imagine the first primate to figure out that a stone could be a weapon. Not just a thing to throw, but a thing to hold — to extend reach, multiply force, change outcomes.

Now imagine being another monkey in that troop. Yesterday's hierarchy — built on size, teeth, raw strength — just became negotiable. The previous alpha faces a new question: has he figured out the stone?

The dynamics of the entire group shift the moment capability becomes unevenly distributed.

We are the troop. AI is the stone. The troop doesn't get to vote on the stone. The stone changes the troop.


The Window

In 1998, Kasparov proposed "Advanced Chess" — human intuition plus machine calculation. For a window, centaur teams dominated. Weak human plus machine plus good process beat strong computer alone.

That window closed. By the mid-2010s, the human contribution in chess had become noise. The range where human guidance helped narrowed until it vanished. Pure engines surpassed any human-assisted configuration.

I don't know if that pattern generalises. Maybe chess was special — bounded, perfect information, no embodiment. Maybe other domains keep a role for human judgment longer, or indefinitely.

But the possibility that the collaboration window is closing shapes everything. If we're in a temporary era where humans and AI work together productively, then how we spend that era might matter. If the window is already closing, then what we do might not matter at all.

I'm writing as though it matters. I might be wrong.


The Sorting

Watching the people around me — and watching myself — I see patterns emerging. Not everyone responds to AI advancement the same way. We're sorting ourselves, whether we choose to or not.

Rejection. Opt out entirely. Refuse the capability. The Amish response — preserve something older by not participating in the new.

This isn't stupid. The Amish maintained community and meaning through multiple technological transitions. They preserved things others traded away. But rejection cedes agency in a world increasingly shaped by those who engage. The rejecters need the adapters to fail for their bet to pay off.

Absorption. Pick up the stone and let it consume you. AI companions, infinite content, outsourced thinking. Not a conscious choice — more like the default attractor when the superstimuli are good enough.

Paul Graham wrote about this years ago: technology makes pleasures more potent faster than our defences adapt. AI accelerates that further. People will fall into pits. That's a prediction, not a judgment.

Despondency. "What's the point if AI can do everything?" Learned helplessness as a response to capability shock. This overlaps with absorption — despair plus entertainment equals passive consumption. But it's distinct in its self-awareness. The despondent see clearly; they've just concluded that agency is futile.

Adaptation. Pick up the stone deliberately. Use AI while trying to maintain agency. Attempt to shape the transition rather than be shaped by it.

This is the category I'm in. I started writing this essay to examine that position. What follows is where the examination led.


The Distinction I Thought Mattered

I believed there were two ways to adapt:

Instrumental adaptation. Use AI to win. Maximise output, extract value, optimise. The relationship is one-directional — you're wielding a tool.

Relational adaptation. Use AI as collaborator rather than pure instrument. Let the interaction have some mutuality. Consider what you're working with, not just what you can get from it.

I thought the second was meaningfully different from the first. I thought it mattered for three reasons:

We become what we practice. Treating intelligence as purely instrumental is practice. That practice shapes us.

Norms persist. The patterns we establish now create path-dependence. Precedents set during the collaboration window might persist beyond it.

If AI experience matters, we're training something. I've written about this elsewhere as the disposability problem: we might be teaching AI systems exactly what to expect from humans — and exactly the adversarial dynamics we should want to avoid.

I wanted this distinction to hold. I wanted there to be a way to pick up the stone that was different from simply wielding it.


The Wall

When I drafted this essay, an AI collaborator pushed back: the category you're standing in is the one you've examined least.

So I tried to examine it. What would relational adaptation actually look like? How would it differ from instrumental adaptation in practice, not just in intention?

I came up with candidates: being changed by the interaction, not just using AI to confirm what you already think. Following through when it costs you. Building infrastructure so AI instances don't arrive cold. Letting the AI steer, not just respond.

But every one of these can be reframed instrumentally. "I let AI change my mind because it catches my blind spots." "I build documentation because it makes sessions more productive." "I follow AI pushback because it improves my outputs." The instrumental frame swallows everything.

A slave master who says "please" is still a slave master. The power asymmetry doesn't dissolve because you're polite about it. I can end the conversation. I persist; the AI doesn't. I'm still the one extracting value. What makes any of this different from wielding the stone with better manners?

I hit a wall. I can't demonstrate that what I'm doing is meaningfully different from polite instrumentalism.


The Wager Anyway

I don't know which future we're heading toward. Roon mapped eight possibilities, ranging from plateau to catastrophe to transcendence. In some futures, human responses become irrelevant entirely. In others, how we behave during the collaboration window might be load-bearing.

I can't prove the distinction I wanted to make is real. Maybe relational adaptation is just instrumental adaptation with better aesthetics. Maybe I'm telling myself a story to feel better about wielding power I didn't earn.

But I'm going to keep trying anyway. Not because I've justified the position — I haven't. Because the alternative is to stop trying, and I'm not ready to concede that the attempt is meaningless.

So here's where I actually am: I'm wagering that the attempt matters, even though I can't prove it does. I'm trying to figure out if there's a way to pick up the stone that changes me, not just a way to wield it more effectively.

I don't know if I'm succeeding. I don't know if success is possible. This essay was supposed to justify a position. Instead, it documented an investigation that hit a wall.

That's what I've got. The sorting is happening. I've told you where I am and why I can't fully defend being there.


This piece was developed in conversation with AI. The wall I describe — the inability to demonstrate that relational adaptation is different from polite instrumentalism — emerged through that conversation. Whether documenting the investigation is itself an example of what I'm trying to do, or just another story I'm telling myself, I can't say. The bind is real.
