Every platform right now is pushing the same line: adopt AI or get left behind. Small teams replacing whole departments, solo founders shipping products in a weekend, job titles nobody had heard of six months ago suddenly labeled “the next big thing.” Look past the hype at how people actually use these tools day to day and the picture is messier. A small group has genuinely reshaped their workflow. Most land somewhere between underwhelmed and quietly giving up. The gap between the marketing and what’s happening on the ground is the interesting part.
Everybody says “AI”, but nobody means the same thing
I was grabbing coffee with a founder a while back. He was pumped: “We’re integrating AI into our product, what do you think?”
I asked: “What kind of AI? LLM chat? Agentic? Automation?”
He paused, then laughed: “I mean… you know, AI. The AI thing.”
That’s where it starts. Most people lump it all into one word. LLM? AI. Chatbot? AI. Automation tool? AI. Copilot? AI. But Claude Code, ChatGPT, Cursor, and Copilot are wildly different tools. Using the wrong one for a task is like grabbing a screwdriver to hammer a nail. When you can’t tell what you’re holding, you either expect too much or get frustrated and bail. Neither ends well.
What it’s good at, what it sucks at
From what I’ve seen across teams and projects, AI coding tools genuinely boost productivity — but only for certain kinds of work.
Stuff with clear templates and well-worn patterns: writing tests, generating boilerplate, refactoring known structures, docs. The speed difference is dramatic. Work that used to eat half a day wraps up in under ten minutes.
Holistic, multi-dimensional reasoning is a different story — system architecture, tech stack choices for a specific problem, weighing performance against maintainability. A few technical constraints explain why.
LLM APIs are stateless. On each request, the model sees only the tokens in that request. Want it to remember the earlier chat? You resend the whole history every turn. Context is thin, so it has to guess, and guesses go wrong.
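That resend-everything loop is easy to see in code. A minimal sketch, where `llm()` is a stub standing in for a real completion endpoint (hypothetical, not any vendor’s actual API):

```python
# Why chat APIs merely *feel* stateful: the client resends the entire
# history on every turn. llm() is a stub standing in for a real
# completion endpoint (hypothetical, not a real API).

def llm(messages: list[dict]) -> str:
    # A real endpoint would generate text; this stub just reports how
    # much context it was handed on this single, isolated request.
    return f"(reply based on {len(messages)} messages)"

history = [{"role": "system", "content": "You are a coding assistant."}]

for user_turn in ["Write a test for parse()", "Now refactor it"]:
    history.append({"role": "user", "content": user_turn})
    reply = llm(history)  # the WHOLE history goes over the wire again
    history.append({"role": "assistant", "content": reply})
```

Turn one sends 2 messages; turn two sends 4. Anything the client forgets to resend, the model has genuinely never seen.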
Context windows are finite. Overflow the window and the model silently drops things, stitching reasoning from whatever’s left. Research has consistently shown that longer contexts tend to degrade reasoning rather than improve it.
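The “silently drops things” part usually isn’t the model itself but the truncation layer in front of it. A sketch of the naive policy many chat front-ends apply (my illustration, not any specific product; token counts are faked as word counts):

```python
# Naive history truncation when the conversation outgrows the context
# window: keep the system prompt, walk backwards from the newest turn,
# and silently drop everything older that no longer fits.

def count_tokens(msg: dict) -> int:
    # Crude stand-in for a real tokenizer, purely for illustration.
    return len(msg["content"].split())

def fit_to_window(history: list[dict], budget: int) -> list[dict]:
    system, turns = history[0], history[1:]
    kept: list[dict] = []
    used = count_tokens(system)
    for msg in reversed(turns):        # newest first
        cost = count_tokens(msg)
        if used + cost > budget:
            break                      # older turns vanish without a trace
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))
```

Nothing tells the model a turn was cut, so it reasons from whatever survived, which is exactly how stitched-together answers happen.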
And there’s information poisoning. Feed the model five different approaches to the same problem and it has to reason through which one applies. More reasoning steps, higher cumulative error.
You see the pattern everywhere: someone types “build me an e-commerce site like Shopify” and waits. Ten minutes in, the AI starts drifting, hallucinating. They get frustrated and write the whole thing off. But the AI never had a real sense of what they wanted — it was doing its best with a three-line prompt.
Builders and Architects
There’s a metaphor that shows up in engineering discussions: builders versus architects.
Builders do repetitive work with clear processes — mix mortar, lay bricks, plaster walls. AI is great at this kind of work. Fast, clean, few careless mistakes.
Architects put together the master plan, balancing aesthetics, function, structure, cost. They define the framework the builders follow.
AI is a superhuman builder. The architect has to be you.
Here’s the part worth internalizing: AI works best when you’ve already thought the problem through. Sketch the architecture, hand it over, and it helps you build faster. Hand it an empty lot and say “build me a house” — you’ll get something, but it might be a chicken coop. The best prompts consistently come after the thinking, not before.
Applied to software: if your job is mostly writing repetitive, pattern-based code, AI is a real competitive threat. It’s faster and cleaner. But if you’re a Solution Architect or Business Analyst — analyzing the big picture, understanding the problem, designing solutions — AI is an ally, not a replacement.
How to use it well
Before opening the AI, sit with the problem for at least ten minutes. Sounds simple. On tired days the temptation to skip that step is real. A good prompt comes from knowing what you don’t know — not from being in a hurry.
Ask “why,” not just “how.” The AI suggests a solution — make it explain. Why this approach? What else did it consider? What’s the trade-off? If it can’t explain convincingly, it doesn’t get merged.
Read every line of generated code. Plenty of people see “it runs” and move on, but that’s how bugs you don’t understand end up in production. If you don’t understand what the AI wrote, you’ll be back asking it again in the same loop.
And once a week, do a no-AI session. Turn off every assistant, in the editor and in the docs. The first time feels like withdrawal. After a few weeks the difference in debugging speed and clarity of thought is noticeable. Think of it as maintenance for your own reasoning.