
Wands From the Sky

The cost of intelligence collapsed. Everyone got a wand. Where you point it is the only interesting question left.




The Wands

Wands are falling out of the sky. Most people haven't noticed. The ones who have are busy arguing about what kind of wand it is, or whether wands are safe, or whether wand-users will replace non-wand-users in the job market.

Almost nobody is asking the interesting question: where do you point it?

The cost of intelligence has collapsed. Not completely — not yet — but enough that a single person with a laptop can now do research, analysis, and synthesis that would have required a team and a budget eighteen months ago. The bottleneck is no longer capability. It's direction.

And direction, it turns out, is revealing. People say they'd never behave like the debauched billionaire, but most of them have simply never had the opportunity. Power doesn't corrupt so much as it exposes what was always there. AI isn't giving everyone wealth, but it is giving everyone access to a kind of leverage that used to require teams, budgets, and credentials. For the first time, a very large number of ordinary people have something that functions like power — and what they do with it is starting to diverge.

The Sorting (Continued)

I wrote previously about picking up the stone — the moment when a new capability appears and people sort themselves by response. Rejection, absorption, despondency, adaptation. That sorting is accelerating.

But within the "adaptation" category, a second sorting is happening. Among the people who've picked up the wand, the question is no longer whether to use it. It's what for.

Someone I know — a relative — picked up the wand and immediately thought: I can build a SaaS in thirty minutes. He's been chasing entrepreneurial success his whole life, following a father who chased it too. I don't knock him for it — I think it's great that he's trying. But his excitement is about the capability itself, not about where to point it. He's dazzled by the wand. The question of direction hasn't arrived yet.

I think it will, for him and for a lot of people. Once the novelty of building things fast wears off — once everyone can build things fast — the question that remains is: what was worth building?

The Privilege of Direction

Here's something I need to say plainly: I am writing this from a position of extraordinary privilege.

I have a job. I have a roof. My daughter is healthy. I'm not fleeing anything. The fact that I get to sit here and think about purpose and direction and what to do with AI is itself a luxury. Maslow would put me near the top of the pyramid — self-actualisation territory, where the questions are about meaning rather than survival.

I'm not going to pretend otherwise, because pretending otherwise would undermine the argument I'm about to make.

My own career is under real threat: I'm a software developer, and I haven't written code by hand in months because AI writes it better and faster. But even that anxiety is a privileged anxiety. I'm worried about career obsolescence, not about clean water.

Aligned With What?

There's a conversation happening in AI research about alignment — making sure AI systems do what humans want. The question is treated as technical: how do you specify objectives, how do you prevent reward hacking, how do you maintain human oversight.

But there's a prior question that rarely gets asked: aligned with what? What is the thing we want AI aligned to? What do humans actually want?

I don't have a universal answer. I doubt one exists. But I have a heuristic — a reasonable one to begin with:

Help people climb the hierarchy.

Maslow's hierarchy of needs, reimagined for the age of collapsed intelligence costs

Two billion people lack reliable access to clean water. Four hundred million have rare diseases that take nearly five years to diagnose despite existing diagnostic tools. Four hundred and thirty million need hearing rehabilitation that already exists and is already affordable.

These aren't technology problems. The technology works. They're deployment problems, institutional problems, recognition problems — the overhang this site is built to map.

If collapsed intelligence costs let me investigate those problems more deeply, map them more clearly, and make them more visible — that seems like a defensible use of the wand. Not the only defensible use. But a defensible one.

The heuristic: use the new capability to help other people reach the point where they get to worry about purpose and meaning, instead of survival. Push the floor up. Make the privilege of direction less exclusive.

Where This Site Fits

This site is my answer to the question. Not a complete answer — I'm still working it out. But the direction is clear enough to state:

Map the places where technology works but deployment doesn't. Investigate why. Make the stuck points visible so that someone — maybe me, maybe someone else — can act on them. Use AI as a research collaborator to do this faster and deeper than one person could alone.

I'd like this to become sustainable. But if it never earns a penny and just ends up helping people, I'd be extremely pleased with that. The two aren't incompatible, but if I had to choose, I'd choose useful over profitable.

That's the wand pointed. We'll see where it lands.
