
Working Theory

The gap between what technology can do and what it actually does. A working theory about technological overhangs and why they persist.


The observation

In 1799, Humphry Davy discovered that nitrous oxide destroyed physical pain. He published his findings. For the next forty-five years, it was used primarily as a party trick, "laughing gas" at fashionable social gatherings, while surgeons operated on screaming patients in the next room.

It wasn't until 1844 that a dentist finally thought to use it for surgery. Forty-five years. Party tricks while patients screamed.

Contrast this with insulin. Frederick Banting and Charles Best isolated it in Toronto in 1921. By January 1922, they were injecting it into a dying child. By the end of that year, mass production had begun. Lab to bedside in months.

Same type of discovery. Radically different response. The difference wasn't technology — it was recognition.

This is technological overhang: the gap between what is technically possible and what we are actually doing with it. The nitrous situation wasn't a knowledge problem — Davy published. It was a recognition problem. Nobody connected "this eliminates pain" to "surgery involves pain."

The pattern repeats across history. Capability precedes deployment by decades, sometimes centuries. And the gap is almost never about the technology itself.


The thesis

We are living through the largest overhang in history. AI capabilities have advanced faster than our collective ability to recognise what they mean. The tools are public. The papers are on arXiv. The capabilities are subscription-priced. Yet most of the world is still treating these tools like nitrous at a party — generating fun images and chatbots — while the surgery remains unchanged.

But the overhang isn't just about AI. It's about a broader pattern: technology isn't the bottleneck anymore. The gap between what's possible and what's deployed is institutional, economic, and social. We have the membranes to desalinate seawater. We have the databases to diagnose rare diseases. We have the soil maps to double crop yields. These things exist. They're published. They work. The problems that persist are stuck for reasons that have nothing to do with capability and everything to do with how capability reaches the people who need it.

What AI has done is collapse two costs simultaneously. The cost of code — I can build something in an afternoon that would have taken a team months. And the cost of research — I can investigate a problem domain in a day that would have taken weeks of specialist time. When both execution and investigation are cheap, the bottleneck shifts to something else entirely: deciding what's worth doing.

The insulin story isn't about choosing the right intervention. It's about recognising that intervention is suddenly possible at all. The frame isn't prioritisation within constraints — it's expanding who gets to help.


What "Avenues of Investigation" are

This site exists to find the places where the overhang is real and map them.

I call these investigations Avenues. Each one takes a specific domain and asks a set of questions: What technology exists? Where is it deployed and where isn't it? What's blocking deployment? Is the bottleneck something a small actor could move, or does it require capital and policy I don't have?

So far: desalination, landfill robotics, rare disease diagnosis, hearing aids, and soil. More are in progress.

The goal is cartographic. I'm making maps, not promises. The output of an avenue is legibility: making a problem space clear enough that someone (maybe me, maybe not) can see where to push. The question each avenue answers is where the overhang exists and why it persists. Sometimes the answer reveals a lever. Sometimes it reveals that the lever is out of reach. Both are useful.

Avenues don't have to be civilisational. Some end with "here's a problem nobody's solving because there's no money in it, here's what exists, here's where different kinds of people might actually help." Others end with "this is intractable at my scale" — which is still valuable. The map exists. Someone else might use it.


The question

These investigations share a shape. Technology exists. It works. It's published. And the reason it doesn't reach the people who need it is never "we need a better algorithm." It's economics, maintenance, integration, perception, trust.

The prevailing narrative — that we need more research, more breakthroughs, more innovation — misses the point. We don't have a shortage of capability. We have a shortage of deployment. The problems that persist are coordination failures, not knowledge failures.

That shifts the question from "what can we build?" to something harder: given that the tools exist, who has the agency to connect them, and do they think it's worth doing?

I keep coming back to the same point: if these things are easy to do, what's missing is the agency to do them and the conviction that they're worth doing. The overhang persists not because the problems are hard but because they are unsexy: waste management, village water kiosks, data plumbing, soil chemistry. There's no prestige and no venture-scale return. So nobody does them.

I might be wrong about which problems are tractable. But I'd rather be wrong while trying than right while watching.


Who I am

I'm a software builder, not a domain expert. I don't know membrane chemistry, rare disease genetics, or soil science. What I do is read papers, talk to systems, find the structural reasons why capability isn't reaching deployment, and write what I find.

The value — if there is any — is cartography. Making problem spaces legible. Surfacing the specific points where a small lever might move a heavy object, and being honest about when the lever is out of reach.

The secondary work is building. When an avenue reveals something tractable — a tool that could help twenty people in a care home, an interface that helps someone navigate bureaucracy — I try to build it. Mapping civilisational problems is valuable, but it can also be a way to feel productive while avoiding action. Fixing things is the best way to complain.

I'm aware that sitting around thinking about meaning and how to contribute is itself a privilege. It's a sign of being far up Maslow's hierarchy — the luxury of choosing what problems to work on. For me, that privilege comes with a question I can't ignore. The point isn't just to solve problems — it's to free people from problems so they can think about what they want to do with their lives.

The question isn't whether AI will change things. It's whether the change will happen like nitrous oxide, with decades of delay while the capability sits in plain sight, or like insulin, with recognition followed immediately by action.


The personal version of this argument — where the wand metaphor comes from — is Wands From the Sky. If you think I've got something wrong, I'd like to know. See also: What If I'm Delusional?
