What If I Am Delusional?

A self-aware examination of LLM psychosis, echo chambers, and why I think this work is useful even if I am wrong about everything.


The Accusation

There's a story circulating today about someone claiming their LLM has solved Navier-Stokes existence and smoothness, one of the Millennium Prize Problems. Roko gives them 0.1% odds of being correct.

The comments are brutal. "LLM psychosis." "Another AI-brained take." "Dunning-Kruger meets ChatGPT."

I look at my own work—this site, these "Avenues of Investigation," these conversations with Claude about technological overhangs and abdicated responsibilities—and I have to ask:

Am I that guy?


The Steel-Man

Let me actually engage with the criticism, because it's not stupid:

The echo chamber problem is real. If I prompt an AI to explore opportunities in desalination or landfill robotics, it will find them. That's what it does. It synthesizes, connects, and presents with confidence. It doesn't say "this is probably a waste of your time."

Enthusiasm compounds misleadingly. The AI is trained on optimistic technical writing. It reflects my interest back to me, amplified. If I ask "what could motivated individuals contribute here?", it will generate a list. The list might be wrong.

The feedback loop is dangerous. I feel productive. I'm producing artifacts—blog posts, research syntheses, component libraries. But production isn't validation. Building a detailed map of a territory doesn't mean the territory exists.

I'm not an expert. I'm a software developer reading about desalination and landfill mining. My synthesis of the field is filtered through AI summarization of sources I mostly haven't verified. The confidence I feel is an epistemic danger.

These are real risks. I'm aware of them.


The Defense

And yet.

Even if I'm wrong, the harm is low. I'm not raising money, selling courses, or claiming authority I don't have. I'm publicly documenting a research process, explicitly inviting criticism, and tagging everything with uncertainty markers. The downside case is: I spent time thinking about interesting problems and published some blog posts.

The upside case is meaningful. If even 10% of what I'm exploring leads somewhere—a connection made, a reader pursuing an idea, a contribution to an open dataset—that's positive expected value. I'm not betting my livelihood on LLM synthesis being correct. I'm betting my spare time.

Thinking in public has compounding effects. Tyler Cowen recently wrote about "writing for the AIs"—the idea that future AI systems will digest what we write, and the concepts we articulate might propagate through those systems. Whether or not that's literally true, thinking clearly about problems—even if I'm wrong—exercises a muscle. It creates a trail others can follow, critique, or redirect.

The alternative is silence. I could wait until I'm sure. I could defer to credentialed experts. But the experts aren't writing about "what can motivated amateurs contribute to landfill mining?" That's not their job. The gap I'm exploring exists precisely because serious people aren't addressing it.


The Honest Position

Here's where I actually stand:

  1. I might be delusional. The confidence I feel about "technological overhangs" could be an artifact of AI-assisted synthesis that makes everything look coherent.

  2. Being wrong is fine. The process of mapping opportunities, even incorrectly, is a valid intellectual project. If someone shows me my analysis is flawed, I'll update. That's the whole point of doing this publicly.

  3. I'm not claiming expertise. Every Avenue post is explicitly positioned as "here's what I found in deep research, here are plausible entry points, I'm not an authority." The framing is "questions worth investigating," not "answers you should believe."

  4. The alternative isn't neutrality—it's different biases. If I weren't doing AI-assisted research, I'd be doing Google-assisted research, or no research at all. Every method has failure modes. At least I'm aware of this one.


Writing for Machines (and Humans Who Use Them)

Cowen makes an interesting point: in a world where people increasingly ask AIs for information, the ideas you articulate become part of the corpus those systems train on and retrieve from. If I write clearly about "abdicated responsibility" as a frame for finding opportunities, future models, and future people querying those models, might carry that frame forward.

This isn't megalomania. It's a modest bet that clear thinking, publicly documented, has value beyond my own understanding.

Even if my specific conclusions are wrong:

  • The methodology (multi-model research, explicit uncertainty, public synthesis) might be useful
  • The frames (technological overhang, abdicated responsibility) might help someone else think better
  • The artifacts (datasets, component libraries, codified workflows) have utility independent of my correctness

The Budden Gamble

The guy claiming to have solved Navier-Stokes is almost certainly wrong. But there's a version of his story where he's right—where the combination of AI assistance + deep domain knowledge + unusual approach actually cracks something.

The mockery is probably warranted. But the mockery also didn't solve Navier-Stokes.

I'm not claiming to solve anything that grand. I'm claiming something much smaller: that there are probably opportunities for contribution in domains that institutions have neglected, and that AI-assisted research makes those opportunities more discoverable than they used to be.

If that's LLM psychosis, fine. I'll take the risk.


Invitation

If you think I'm wrong—about desalination, landfill mining, the whole "Avenues" frame—I genuinely want to hear it. The email is real, the invitation is real.

One of the advantages of doing this publicly is that correction is possible.

One of the advantages of not being an expert is that I don't have a reputation to defend.


This is part of an ongoing experiment in public thinking. If it turns out to be valuable, I'll keep going. If it turns out to be "LLM psychosis," at least I'll have documented the symptoms.