A Working Theory
On technological overhangs, the gap between the possible and the actual, and why I am mapping the large problems and occasionally building for the small ones.
Two wards, two responses
In 1799, Humphry Davy discovered that nitrous oxide relieved physical pain, and he published his findings. For the next forty-five years, it was used primarily as a party trick — "laughing gas" at fashionable Georgian gatherings — while surgeons operated on screaming patients in the next room.
It wasn't until 1844 that a dentist finally thought to use it for surgery. Forty-five years. Party tricks while patients screamed.
Contrast this with insulin. Frederick Banting and Charles Best isolated it in Toronto in 1921. By January 1922, they were injecting dying children. By December, mass production had begun. Lab to bedside in months.
Same type of discovery. Radically different response.
The difference wasn't technology
The difference was recognition.
This is technological overhang: the gap between what is technically possible and what we are actually doing with it. The nitrous situation wasn't a knowledge problem — Davy published. It was a recognition problem. Nobody connected "this eliminates pain" to "surgery involves pain."
We are living through the largest overhang in history. AI capabilities have advanced faster than our collective ability to recognise what they mean. The tools are public. The papers are on arXiv. The capabilities are subscription-priced. Yet most of the world is still treating these tools like nitrous at a party — generating fun images and chatbots — while the surgery remains unchanged.
The prevailing narrative is "bubble/slop/doesn't work." Meanwhile, Terry Tao is blogging about vibe-proving Erdős problems and Andrej Karpathy — one of the most technically sophisticated people in AI — is tweeting that he feels like he's missing a 10x capability boost due to "skill issue."
If Karpathy feels behind, the overhang is real.
What this site is for
My thesis is simple: we are surrounded by problems that are stuck not because they are technically hard, but because nobody is paying attention.
This isn't Effective Altruism — the question isn't "given limited resources, how do we maximise good?" It's something different: given that AI has collapsed the cost of both building and investigating, what problems are now tractable for people who previously couldn't contribute?
Two costs have collapsed. The cost of code — I can build something in an afternoon that would have taken a team months. And the cost of research — I can investigate a problem domain in a day that would have taken weeks of specialist time. When execution is cheap, the bottleneck shifts to agency: deciding what's worth doing.
The insulin story isn't about choosing the right intervention. It's about recognising that intervention is suddenly possible at all. The frame isn't prioritisation within constraints — it's expanding who gets to help.
This site exists to find those problems and explore what the new leverage might do about them. Some of what I find will be tractable for me. Some won't — but the investigation itself produces an artifact that might help someone better positioned.
Reconnaissance
The primary work is mapping — understanding why capable technology isn't being deployed, where the bottlenecks are, who's already working on it. I call these investigations Avenues. They're cartographic: the goal is to make a problem space legible, not to promise a solution.
Example: Desalination. We have efficient membranes and cheap solar power. Yet small-scale projects fail because of payment models and supply chains, not chemistry. I map these failure modes to find where a small lever might move a heavy object — where a better maintenance model or payment interface could mean clean water for a village that currently has none.
Different avenues have different bottlenecks. Some are software-tractable — a tool could help. Some need distribution — getting existing solutions to people who need them. Some need policy change or capital I don't have. Part of the investigation is figuring out which is which, and whether there's anything I can personally do about it.
Avenues don't have to be civilisational. Some are: "Here's a problem nobody's solving because there's no money in it, here's what exists, here's where different kinds of people might actually help." Others are local and small. And some investigations end with "this is intractable for me at my scale" — which is still valuable. The map exists. Someone else might use it.
Intervention
The secondary work — when opportunities emerge — is building something small. A tool that helps twenty people in a care home. An interface that helps someone navigate bureaucracy. Something for the elderly people in my town who are underserved by existing systems. These are the "nobody can be bothered" problems: the technology is trivial, the need is real, but there's no prestige and no profit, so nobody does it.
This is where the thesis gets tested, not just stated. Mapping civilisational problems is valuable, but it can also be a way to feel productive while avoiding action. Blogging about what others ought to do — without doing anything yourself — is hollow. And there's a real risk that AI-assisted research makes everything look more tractable than it is — that I'm pattern-matching on coherence rather than truth. I've tried to steelman that criticism.
Fixing things is the best way to complain. A care home activity tool that helps twenty elderly people is a more real contribution than a map of desalination that never leads anywhere. The small thing tests whether I can actually ship, find users, iterate — or whether I'm just a more sophisticated commentator.
I might be wrong about which problems are tractable. But I'd rather be wrong while trying than right while watching.
The ward is filling up
Two billion people lack reliable access to safe drinking water. The technology to fix this exists and is getting cheaper. The gap is institutional, not chemical.
That's one example. There are others. Problems that look intractable until you look closer and realise: the pieces are there, nobody's connecting them.
I'm aware that sitting around thinking about meaning and how to contribute is itself a privilege. It's a sign of being far up Maslow's hierarchy — the luxury of choosing what problems to work on rather than scrambling to meet basic needs.
For me, that privilege comes with a question I can't ignore. Part of what I want to do is help others move up the same pyramid. Clean water means a village can stop worrying about cholera and start thinking about education. Faster diagnosis means a family can stop their diagnostic odyssey and start living. The goal isn't just to solve problems — it's to free people from problems so they can think about what they want to do with their lives.
The question isn't whether AI will change things. It's whether the change will happen like nitrous oxide — decades of delay while the capability exists — or like insulin — recognition followed immediately by action.
I don't know if what I'm doing will matter. But I'd rather try than watch.
If you think I've got something wrong, I'd like to know. See also: What If I'm Delusional?