There are two kinds of programmers right now: those who type code into an AI and those who type requirements into an AI. I've done both, seriously, for months, and one of them is dramatically better at producing software that doesn't fall over in week three.
This is a long post about what changed for me, what a "spec" actually is in this workflow, and the small habits that made it stick.
My old flow was the one most people are still using: open the editor, open a chat with an AI, describe what I want, paste the reply, run it, complain, iterate. It felt fast. The first version of anything was always up in 20 minutes.
The problem was always week two. Week one I'd build five features in a flurry. Week two I'd try to add a sixth and realize the model didn't know anything about the first five — they weren't in its head anymore, they weren't in a file that mattered, they were scattered across chat logs. I'd be back to onboarding the machine every morning.
⚠ anti-pattern: treating your chat history as the source of truth for how your program works.
Now I write a spec.md before I write any code. It lives in the repo. It's the first file the AI reads in every session. It contains, in order: the purpose in one sentence, the users, the data model, a numbered list of behaviors, and an explicit out-of-scope list.
It's short. It's maybe 200 lines by the end. It gets edited constantly. The spec is the program; the code is just one implementation of it.
```
# spec.md — budget-oled
purpose: "show my current weekly burn on a small OLED"
users: "just me, standing at my desk"
data:
  Transaction { id, ts, amount_cents, category, account }
  Week { start, end, budget_cents, spent_cents }
behaviors:
  1. poll transactions every 15 min
  2. sum spent_cents for current week
  3. display "$123 / $400" on line 1
  4. display progress bar on line 2
  5. blink if over budget
out_of_scope:
  - categorization UI (runs on phone)
  - historical charts
  - anything requiring wifi setup on device
```
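A spec this concrete translates into code almost mechanically. Here's a minimal sketch of behaviors 2, 3, and 5 in Python — the type and function names are my own inventions, not anything the spec mandates:

```python
from dataclasses import dataclass

# Hypothetical types mirroring the spec's data section.
@dataclass
class Transaction:
    id: str
    ts: int           # unix seconds
    amount_cents: int
    category: str
    account: str

@dataclass
class Week:
    start: int        # unix seconds, inclusive
    end: int          # unix seconds, exclusive
    budget_cents: int
    spent_cents: int = 0

def tally(week: Week, txns: list[Transaction]) -> Week:
    # Behavior 2: sum spent_cents for the current week.
    week.spent_cents = sum(
        t.amount_cents for t in txns if week.start <= t.ts < week.end
    )
    return week

def line1(week: Week) -> str:
    # Behavior 3: display "$123 / $400" on line 1.
    return f"${week.spent_cents // 100} / ${week.budget_cents // 100}"

def over_budget(week: Week) -> bool:
    # Behavior 5: blink if over budget.
    return week.spent_cents > week.budget_cents
```

Notice that every function maps to a numbered behavior. When the AI generates something like this, you can review it line-by-line against the spec instead of against your memory.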
An AI is great at producing the next 200 lines of code given what it knows. It is bad at remembering what it wrote last Tuesday. A spec is a contract that survives context resets. When I start a new session, I say "read spec.md, propose a change to behavior 3" and we are immediately in the same conversation we ended with yesterday.
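If you script your sessions at all, "read spec.md first" can be enforced rather than remembered. A sketch, assuming nothing about any particular AI tool or API — `opening_prompt` is a hypothetical helper that just prepends the spec to whatever you ask:

```python
from pathlib import Path

def opening_prompt(repo: Path, request: str) -> str:
    # The spec is the first thing the model reads, every session,
    # so the context reset costs you nothing.
    spec = (repo / "spec.md").read_text()
    return f"{spec}\n\n---\n\n{request}"
```

However you actually talk to the model, the shape is the same: the spec rides along with every request, so "the same conversation we ended with yesterday" is a property of the repo, not of the chat log.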
This also turns out to be how I work best with myself. The spec is a rubber duck that remembers.
The crucial step everybody skips comes at the end of the session: you just learned something. Put it in the spec. Not in the commit message. Not in your head. In the document the machine reads first every morning.
Even an open question, written down as `// TODO decide: retry on 5xx or dead-letter?`, gives the model a real handle to pull on. I haven't done a formal study; take the improvement with a grain of salt. What I can say is that projects are making it to week three. Last year most of them didn't.
When I hand a good spec to an AI and ask "build this," the reply is boring. The reply is "okay, I'm going to add three files and modify one. Here they are." No questions. No clarifications needed. That's the feeling you're aiming for: the spec was clear enough that the machine had nothing to ask.
When that happens, you've done the real work of programming. The rest is typing.