You Can Outsource Thinking, Not Understanding
A phone takes the picture. An AI writes the code. Both produce something recognizable on the first try. Neither produces a photographer or a developer. The difference is small and stubborn: you can outsource the act, not the judgment. You can outsource your thinking. You cannot outsource your understanding.
🤔 Problem
“AI makes everyone a developer” sounds true because the output looks plausible. The same claim sounded true about phone cameras around 2010. Wedding shots, sunset shots, baby shots, all sharp, all in focus, all flooding social feeds. Photography had been democratized.
It wasn’t. The number of people who could press a shutter went up. The number of people who knew why a frame mattered did not.
The confusion lives in one word: make. There are two things hiding inside it. There is the act: pressing the shutter, writing the function, generating the layout, scaffolding the test. And there is the judgment: why this frame, why this function, why now, why at all. AI collapses the act. It does not collapse the judgment.
The visible part got cheap. The invisible part stayed expensive. We keep mistaking the cheap part for the whole job, because the cheap part is the part you can show on a screen.
A camera is not a photographer. An AI is not a developer. We have been here before.
📷 Auto mode
What AI gives you for free is real, and that is what makes it dangerous.
A camera's auto mode handles exposure, focus, white balance, stabilization. You point, you press, you get a competent shot of the wedding. Modern AI assistants handle scaffolding, boilerplate, types, tests, idioms. You prompt, you accept, you get a competent endpoint. Both ship. Both look fine. Both fail in the same quiet way: no point of view.
Auto mode produces recognizable mediocrity at scale. The first time it feels like magic. The hundredth time it feels like everyone’s portfolio looks the same. The same grading. The same lens flare. The same React component. The same handler. The same comment style.
Walk through any startup’s GitHub today and you can feel it. The code is correct. The code is in focus. The code is also, somehow, the same code as the last three startups. It runs. It also has nothing to say.
This is not a complaint about AI. This is a description of what good enough looks like at zero marginal cost. When the act gets cheap, the median rises and the variance collapses. The floor goes up. The ceiling stays exactly where it was.
The danger is not that auto mode is bad. The danger is that it is good enough to stop you from learning. You stop reading the histogram. You stop reading the diff. You stop asking why. The shot comes out fine. The feature ships. Nothing visible is broken.
What is broken is your model.
🎯 The decisive moment
Cartier-Bresson called it the decisive moment: the instant where form and content lock together and the photograph is the photograph. You cannot ask the camera for it. You cannot ask the lens for it. The decisive moment lives in the photographer.
Software has its own decisive moments, and they all live in the developer.
The photographer chooses: this frame, this light, this story, this moment, not the one before, not the one after. The developer chooses: this architecture, this boundary, this tradeoff, this name, what to build, what not to build, when this code is load-bearing and when it is throwaway. AI will write any of those. AI will not pick which one is right.
It cannot. It does not live in your system. It does not know which service is on fire this quarter, which team owns the queue, which migration is half done, which on-call engineer gets paged at 3am if this query is slow. It does not hold the model of consequences. You do. Or you should.
Understanding is the compressed model of a domain that lets you predict consequences before you take an action. It is what tells you the diff is safe before you run the test. It is what tells you the photo is the photo before you raise the camera. You cannot prompt it into existence. You build it the slow way: by paying attention, by tracing the request, by reading the failure mode, by being wrong out loud and updating.
A photographer with a model of light walks into a room and sees the shot. A developer with a model of the system reads a diff and sees the bug. The camera and the AI just make the shutter cheaper.
🧪 Example
A junior dev asks an assistant to add a token refresh flow to an auth endpoint. The model writes the function. The tests pass. The diff is small. The PR is approved in fifteen minutes. The deploy is clean.
Three days later, support tickets come in: users are seeing other users’ accounts. The cookie scope was wrong. The refresh path set the cookie at the apex domain instead of the host. The dev never traced what Set-Cookie actually does on the response, never read the spec, never thought through the request lifecycle. The AI wrote the function. The function compiled. The function shipped. The function was a technically sharp photograph of the wrong thing.
A senior reads the same code in thirty seconds and sees it. Same tool. Same output. Different understanding. Different outcome.
This is not a story about juniors versus seniors. Seniors get caught the same way the moment they stop paying attention. This is a story about what happens when the act gets cheap and the judgment does not get any cheaper at all. The gap between “it runs” and “it is right” used to be filled by friction. The friction is gone. The gap is not.
🚀 Take it further
A practical playbook for keeping the muscle alive while you offload the rote:
- Sketch before you prompt. Describe the system to yourself, in plain text, in your own words, before you open the chat box. If you cannot describe it, the model cannot either, and one of you will guess.
- Read line by line, at least the first time you touch a new area. The compiler does not check that you understood. You check that you understood.
- Trace the unhappy path. Generated code shows the happy path. The bug lives elsewhere. Walk the failure mode on paper before you call it done.
- Run without the assistant on a schedule. A small rep, daily or weekly, with no autocomplete. Atrophy is silent until it isn’t.
- Trust your discomfort. If you cannot explain why it works, do not ship it. “It runs” is not understanding. It is luck with extra steps.
- Build taste by reading great code in public. Accepting suggestions does not grow taste. Reading does. Pick a repo you admire and read it like a novel.
- Own the merge button. The AI proposes. You commit. The git history has your name on it, not the model’s. Act like it.
The camera makes the shot easier. The photographer makes the shot matter. AI makes the code easier. The developer makes the code matter. The tool collapsed the act, and that is a real gift. It cannot collapse the judgment, and that is the job. Outsource the thinking. Keep the understanding. The git history is yours.