There’s a loop I keep falling into with AI coding tools: hit a bug, paste the code, get a confident fix, apply it, move on. Five minutes. Feels efficient. Then the next day the bug comes back, and I realize nobody — me or the AI — ever understood it.
A React state bug caught me in this loop recently. State updating wrong, UI rendering all over the place. Claude Code gave me a confident explanation and a fix. Applied it, ran it, worked. Closed the tab.
Next day, bug’s back. This time I turned the AI off, traced every state transition by hand, re-read the useEffect cleanup. Took almost an hour. The real cause was a wrong assumption about timing — something Claude Code had no way of knowing, because it doesn’t actually understand the codebase. It answers the question you asked. And I’d asked the wrong one.
That’s the part I keep missing. The AI didn’t give a wrong answer. It gave the right answer to a question I hadn’t thought through.
What slowly slips away
Spend enough time with AI coding tools and something starts to shift. I know more. Whether I understand more, I’m less sure.
The answers aren’t just fast — they’re plausible enough that you stop pushing back, a little at a time. Your brain picks up a new default: see a problem, want an answer, skip the part where you sit with it. You stop being patient with the ambiguity and the hard bits. What worries me isn’t the hours of research I save. It’s that the muscle for building an argument from scratch quietly weakens, and you don’t notice until you need it.
I pair-programmed with a junior dev a while back. We hit a query optimization thing and their hand went straight to Cursor. I said, “Hold on. Turn it off. What would you do without AI?” They sat there for five minutes and then: “I don’t know where to start.” The problem wasn’t hard. But “ask AI first, think later” had already become reflex. That’s the habit a lot of new devs are forming right now.
Why it can’t think for you
There are a few technical reasons the “let the AI figure it out” approach breaks down.
LLM APIs are stateless. Each request, the model only sees what’s in that request. Want it to remember the earlier conversation? You resend the whole thing. Anything missing, it guesses, and guesses compound.
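To make the statelessness concrete, here's a minimal sketch. The `fake_model` function is an invented stand-in for a real chat endpoint, but the shape is the same: the model sees exactly the list of messages you send in this request, nothing more.

```python
def fake_model(messages):
    """Stand-in LLM: all it 'knows' is what's in `messages` for this call."""
    seen = [m["content"] for m in messages]
    return f"I can see {len(seen)} messages: {seen}"

history = []

def ask(history, user_text):
    # Every request must carry the WHOLE conversation so far.
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)  # the model sees only this payload
    history.append({"role": "assistant", "content": reply})
    return reply

first = ask(history, "My useEffect cleanup fires twice, why?")
second = ask(history, "What did I just ask you?")
# The model can answer the second question only because we resent the
# first turn ourselves. Send the second message alone and that context
# simply doesn't exist for it.
```

The "memory" lives entirely in the `history` list on your side. Leave something out of it, and the model fills the gap by guessing.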
Context windows are finite. Once a problem gets complex enough to overflow the window, the model silently drops things and reasons from whatever's left. And even before you hit the limit, research on long-context models (the "lost in the middle" effect) shows that reasoning and recall degrade as the context grows, especially for information buried mid-prompt.
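What "silently drops things" looks like in practice is roughly this sketch of naive context trimming. The token counter is faked as a word count (real tokenizers differ), but the failure mode is the same: the oldest turns fall off the front, and nothing tells the model they were ever there.

```python
def count_tokens(text):
    return len(text.split())  # crude stand-in for a real tokenizer

def fit_to_window(messages, budget):
    """Keep the most recent messages that fit the budget; drop the rest silently."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk backwards from the newest turn
        cost = count_tokens(msg["content"])
        if used + cost > budget:
            break  # everything older than this is discarded
        kept.append(msg)
        used += cost
    return list(reversed(kept))

transcript = [
    {"role": "user", "content": "the bug only happens after hot reload"},  # the key fact
    {"role": "assistant", "content": "noted, checking the effect cleanup"},
    {"role": "user", "content": "here is the full component source ..."},
]

window = fit_to_window(transcript, budget=12)
# The oldest message, the one with the actual timing clue, is gone.
# The remaining context contains no hint that anything was dropped.
```

Real systems trim more cleverly than this, but the core problem survives: the model reasons only over what fits, and it can't flag an absence it can't see.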
And there’s information poisoning. Same problem, five different solutions floating in the context — now the model has to pick, and every extra reasoning step is another chance to drift.
The short version: if you haven’t thought the problem through, there’s no way for the AI to think it through for you.
First Brain and Second Brain
People talk a lot about building a “Second Brain” with AI — memory, knowledge graphs, prompt libraries, rule engines. The tooling really is powerful. I use some of it.
But it only helps if your First Brain still works — sharp enough to verify, to doubt, to catch contradictions, to make the final call. I’ve watched people pour everything into a Second Brain while their First Brain quietly atrophies. What you get isn’t competence. Just confusion, faster.
Early on, using AI coding tools felt like having a fast classmate next to me. Reads quick, writes quick, remembers a lot. Thirty seconds, ten proposals. But he doesn’t know my codebase, doesn’t know the project, and if production goes down at 2am it’s not his problem. He’s sharp. The question was always whether I was sharp enough to check his work. The days I was tired and handed everything off without checking were the same days bugs made it through.
How I use it now
Opening AI is the last step, not the first. I sit with the problem for at least ten minutes before touching the tool, even on tired days. Those ten minutes let me pinpoint what I actually don’t know, instead of a fuzzy “I don’t know anything,” and the prompt that follows is noticeably better.
I ask “why,” not just “how.” It suggests a solution, I push: why this, what else did you consider, what’s the trade-off? If it can’t explain convincingly, it doesn’t get merged.
I read every line of generated code. This is the easiest step to skip, and probably where most of the avoidable production bugs come from. Code that looks fine on a quick skim has a way of turning up later as the root cause.
And one no-AI session a week. Copilot off, Claude Code off, editor and docs. First time is uncomfortable. After a few weeks, though, debugging from first principles gets quicker and my own reasoning feels sharper. Think of it as maintenance for the part of the system that matters most.