What's Still Worth Knowing

AI writes code now. But some human knowledge doesn't get replaced—it gets more valuable. Here's what to learn if you want to be good at AI-assisted development, not just present for it.


There was a brief period in chess—roughly 2005 to 2015—when human-computer teams beat the best computers playing alone. "Centaur chess," they called it. The humans provided strategic intuition; the computers calculated. Together, they were better than either.

Then computers got good enough that the humans became overhead. The centaur era ended.

We're in the centaur era for coding. AI writes most of the code in some workflows. But humans still contribute value—and some human knowledge is becoming more valuable, not less.

This post is about that knowledge. Not "why you should still learn to code" (defensive, probably wrong long-term). Instead: what to know if you want to be good at AI-assisted development, not just present for it.


1. Machine-Readable Output Flags

Most command-line tools have two modes: human-friendly output (pretty, readable) and machine-friendly output (structured, parseable). AI assistants work dramatically better with the machine-friendly version.

Git is the clearest example.

The normal output of git status:

$ git status
On branch main
Your branch is ahead of 'origin/main' by 2 commits.
  (use "git push" to publish your local commits)

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        modified:   src/index.ts
        modified:   package.json

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        src/new-feature.ts

no changes added to commit (use "git add" and/or "git commit -a")

This is nice for humans. But it's verbose, and the format changes based on state. An AI parsing this has to handle many cases.

The porcelain version:

$ git status --porcelain
 M src/index.ts
 M package.json
?? src/new-feature.ts

Two characters of status, then the filename. Every time. No prose. No conditional messaging. The AI can parse this reliably and act on it.
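
To see how much easier this is to act on, here's a minimal sketch using standard tools (awk and grep) to slice the porcelain output. The same predictability is what lets an AI assistant act on it without guessing:

# List only untracked files
git status --porcelain | awk '$1 == "??" { print $2 }'

# Count files with unstaged modifications (second status column is M)
git status --porcelain | grep -c '^.M'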

Other examples:

# JSON output from npm
npm list --json

# Structured output from docker
docker ps --format '{{.Names}}\t{{.Status}}\t{{.Ports}}'

# Machine-parseable from kubectl
kubectl get pods -o jsonpath='{.items[*].metadata.name}'

# Git log with exact format
git log --format='%H|%an|%s' -n 5

Why this matters: If you know these flags exist and include them in your prompts, you'll get more reliable results. The AI might not default to them. Knowing they exist is leverage.


2. What the AI Can't See

Your AI assistant is working with what you give it, plus what it can read from your codebase. But there's a lot it can't see:

Runtime state. What's actually running right now? Which processes? What ports are in use? The AI sees your code, not your system.

# What's actually listening on port 3000?
lsof -i :3000

# What processes are running?
ps aux | grep node

# What environment variables are set?
printenv | grep -i database

Deployed vs. local. Your repo might be three commits ahead of production. The bug report is about production. The AI is reading your local code. You need to tell it what's deployed.
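
A couple of commands close that gap quickly. The branch name and version endpoint below are hypothetical; substitute whatever your deployment actually uses:

# Commits that exist locally but aren't on the deployed branch (hypothetical branch name)
git fetch origin
git log --oneline origin/production..HEAD

# What the running service says it is (hypothetical endpoint)
curl -s https://api.example.com/version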

Network and infrastructure. What can actually talk to what? What are the real DNS entries? What's the actual database schema in production (not the migration files)?
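
When it matters, check the real thing and paste the answer into the conversation. The hostnames and table name here are placeholders:

# The schema the production database actually has (not what the migrations say)
psql "$DATABASE_URL" -c '\d users'

# Does DNS resolve where you think it does?
dig +short api.example.com

# Can this machine actually reach the database port?
nc -zv db.internal.example.com 5432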

Historical context. Why was this weird workaround added? The AI can see the code but not the Slack thread from two years ago explaining the legacy system it's interfacing with.
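
The Slack thread is gone, but the repo's history sometimes points at it. A bit of git archaeology (the file and search string below are made up) is often the fastest way to recover the "why" before you hand it to the AI:

# Who last touched this block, and in which commit?
git blame -L 40,60 src/legacy-adapter.ts

# When did this workaround first appear, and what did the commit message say?
git log -S 'WORKAROUND' --oneline -- src/legacy-adapter.ts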

The skill: Knowing when the AI is missing context and supplying it. Not just "fix this bug" but "fix this bug, and note that in production we're still on v2.3 of the API, not v3 like the local code shows."


3. The Smell Test

AI will confidently generate solutions that "work" in a narrow sense while being wrong in ways that take experience to detect.

Over-engineering. Ask for a simple feature, get an abstract factory pattern with three levels of indirection. The code runs. The tests pass. It's still wrong.

Cargo-culting. The AI has seen a lot of code. Some of it was written by people adding unnecessary complexity because they saw someone else do it. The AI perpetuates these patterns.

Subtly wrong error handling. Catches an exception, logs it, and continues. Technically handles the error. Actually masks a problem that will bite you later.

You can't fully articulate this knowledge. It's taste. It's pattern recognition built from seeing what does and doesn't work in production. The AI doesn't have it (yet). You might.

The skill: Looking at AI-generated code and feeling "this is more complicated than it needs to be" or "this error handling smells wrong"—and trusting that instinct.


4. Verification Instincts

Knowing what to check is different from knowing how to implement. The AI can implement. But what should you verify before shipping?

Questions worth asking:

  • What happens when this input is empty? Null? Unexpectedly large?
  • What's the failure mode if the database is slow? Unavailable?
  • What happens to users who had state in the old version?
  • Is this change backwards compatible with the API clients we don't control?
  • What would an attacker try here?

The AI might not volunteer these. It solved the problem you asked about. It didn't necessarily think about edge cases you didn't mention.

The skill: Having a mental checklist of "things that break in production" and running through it. The AI implements; you verify.
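
Some of that checklist is cheap to run by hand. A few quick probes like these (the endpoint is made up) cover empty, null, and oversized inputs before anything ships:

# Empty body
curl -s -X POST localhost:3000/api/items -H 'Content-Type: application/json' -d ''

# Null where the code probably expects a string
curl -s -X POST localhost:3000/api/items -H 'Content-Type: application/json' -d '{"name": null}'

# Unexpectedly large payload (roughly 10 MB of "x")
head -c 10000000 /dev/zero | tr '\0' 'x' | curl -s -X POST localhost:3000/api/items -H 'Content-Type: application/json' -d @-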


5. Knowing When To Stop

Left to its own devices, an AI assistant will:

  • Add error handling you didn't ask for
  • Refactor adjacent code that "could be improved"
  • Suggest additional features
  • Create abstractions for single-use cases
  • Add types, tests, and documentation to code you're still experimenting with

These aren't bad impulses. But they're not free. Every addition is code you maintain. Every abstraction is complexity you carry.

The skill: Saying "stop, this is enough." Recognising when you're in exploration mode vs. production mode. Knowing that three lines of duplicated code is often better than a premature abstraction.


6. The Right Level of Trust

AI-generated code exists on a spectrum from "obviously correct" to "looks right but might be subtly wrong."

High-trust territory:

  • Boilerplate you've seen a hundred times
  • Standard patterns in well-documented frameworks
  • Code that's immediately testable

Low-trust territory:

  • Anything involving dates, times, or timezones
  • Concurrent or async edge cases
  • Security-sensitive code (auth, encryption, input validation)
  • Complex business logic with implicit constraints
  • Code that interfaces with systems the AI doesn't have documentation for

The skill: Calibrated trust. Skimming the high-trust code, scrutinising the low-trust code. Not reviewing everything the same way.


7. Context Hygiene

If your AI assistant runs npm run dev in the foreground, every hot reload, every webpack message, and every request log ends up in the conversation context: hundreds of lines of noise that the AI has to process, that you're paying for, and that crowd out the actual work.

The fix is a process manager. For Node projects, pm2 is the most common:

# Start your dev server in the background
pm2 start "npm run dev" --name myapp

# Check logs only when you need them
pm2 logs myapp --lines 50

# Search for specific errors
pm2 logs myapp | grep -i error

# Restart after config changes
pm2 restart myapp

# Stop when done
pm2 stop myapp

Why this matters for AI sessions:

  • Context efficiency. The AI pulls logs when needed, not constantly. You specify how many lines. No webpack spam in your conversation.
  • Searchable. When something breaks, grep for the error instead of scrolling through live output.
  • Non-blocking. Your terminal stays free for other commands. The AI can run builds, tests, and checks without killing the dev server.
  • Persistent. The server keeps running even if your Claude Code session resets.

This seems obvious in retrospect, but most people let their dev server spam the context window. It's free context you're throwing away.
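
If you'd rather not install pm2, a rough equivalent with nothing but nohup and a log file gets you most of the same benefit:

# Start the dev server detached, with output going to a file
nohup npm run dev > dev.log 2>&1 &

# Pull logs only when you need them
tail -n 50 dev.log
grep -i error dev.log

# Stop it when you're done (kill any lingering child node process too, if needed)
pkill -f 'npm run dev'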


How Long Does This Last?

The centaur era in chess lasted about a decade. Then the humans became overhead.

Some of this knowledge will stop being valuable as AI improves:

  • Machine-readable flags matter less if AI can reliably parse any output
  • "What the AI can't see" shrinks as context windows grow and tool use improves
  • Verification instincts might be supplemented by AI that stress-tests its own code

Some might become more valuable:

  • Taste and judgment are hard to train into models
  • Knowing when to stop requires understanding goals the AI doesn't have
  • Trust calibration requires understanding what AI is and isn't good at

I don't know how long this window lasts. A year? Five years? Ten?

But right now, in January 2026, this knowledge makes you measurably better at AI-assisted development. It's worth knowing.


The Practical Version

If you're vibing along with AI coding and want to level up:

  1. Learn the --porcelain / --json / --format flags for your common tools. Git, npm, docker, kubectl—whatever you use.

  2. Build a habit of stating context. "In production we're on version X." "The database schema looks like Y." "This is a quick experiment, not production code."

  3. Develop your smell test. When AI output feels "too much," trust that feeling. Ask for simpler. Say "no abstractions" or "minimal implementation."

  4. Know your verification checklist. What breaks in production? Empty inputs, slow dependencies, backwards compatibility, security holes. Run through it.

  5. Practice saying "stop." The AI wants to help more. Sometimes the help is overhead.

  6. Use a process manager. Run your dev server via pm2 or similar. Pull logs when you need them instead of letting them flood your context.

The goal isn't to compete with AI at writing code. It's to be the person who makes AI-written code actually work.
