I use AI every day. Sometimes to fix a bug that’s been staring at me sideways for hours, other times to write a more civil message after yet another failed build. I’m not “against AI”. On the contrary, it’s already part of the daily workflow for almost all of us.
The problem isn’t the tool. It’s the role we’re giving it.
I’m not worried about AI becoming smarter than us. What worries me much more is that we stop thinking before we even start using it.
It’s like buying a kitchen robot and trusting it to make carbonara on its own. It might even taste good, but at that point, who are you really fooling?
Judgment Outsourcing
There’s a pattern I see more and more often, and I call it judgment outsourcing. We’re not just asking AI to generate content anymore. We’re handing over the part where we should evaluate, choose, and say: “Yes, this makes sense.”
The real risk isn’t AI-first development. It’s judgment outsourcing: the point where thinking itself is handed over and we become little more than superficial supervisors.
The Silent Reversal
Over the past few months, I’ve seen something curious happen almost everywhere. AI was supposed to be our right hand. Instead, we’ve become AI’s right hand.
Prompts like:
- “Write an article with these parameters.”
- “Generate the complete solution.”
- “Build the system from scratch.”
And we happily click “accept”.
But when AI generates and you simply say “ok”, you’ve already outsourced judgment.
Some early research suggests this blind delegation isn’t free. Uncritical use of language models can erode critical thinking and independent analysis, especially when people rely on answers that are “ready and plausible”.
It’s a bit like letting your child do homework with a chatbot and then bragging about their A in grammar. Spoiler: nobody learned anything. Not them. Not you.
A Controlled Scar
There was a moment when I did exactly the opposite of what I’m describing here.
I let AI write an almost complete solution for me: architecture, naming, structure. It worked. Tests passed. The feature was “done”. I accepted it.
Two weeks later, when something broke in production, I realized I couldn’t explain why it had been built that way. It wasn’t just a debugging problem. It was a lost-intention problem.
As a backend developer used to living with maintenance and technical debt, that’s a feeling I want to avoid at all costs.
The Model That Actually Works
What works for me is a coach mindset, not “hoping the machine does everything”.
When I write a text, I write it myself. Then I ask AI: “Where is this weak?”, “Is there a logical gap?”, “Can the structure be clearer?”. The result is clearer thinking, with my voice still intact.
The same applies to code. Studies on tools like Copilot show that, in certain contexts, developers using AI can work significantly faster on well-defined sub-tasks, especially when the human already knows what they want and uses AI to accelerate boilerplate, tests, edge cases, and repetitive bugs.
AI is excellent as a reviewer. It’s mediocre as an author.
Like that colleague who never invents anything, but always finds the bug five minutes before deploy.
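To make that concrete, here’s a toy sketch, invented for this article and written in Python only because it’s compact, not taken from any real project. I decide the function’s behavior and write it myself; AI gets the less glamorous job of stress-testing the contract I already defined with edge-case tests.

```python
import pytest


# I wrote this myself: the behavior and the error handling are my decisions.
def paginate(items, page, page_size):
    """Return one page of `items`, with pages counted from 1."""
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be >= 1")
    start = (page - 1) * page_size
    return items[start:start + page_size]


# This is the part I'm happy to let AI draft: edge-case tests that probe
# the contract I already defined, instead of inventing a new one.
def test_empty_list_returns_empty_page():
    assert paginate([], 1, 10) == []


def test_last_partial_page():
    assert paginate([1, 2, 3, 4, 5], page=2, page_size=3) == [4, 5]


def test_page_beyond_the_end_is_empty():
    assert paginate([1, 2, 3], page=5, page_size=2) == []


def test_invalid_page_raises():
    with pytest.raises(ValueError):
        paginate([1, 2, 3], page=0, page_size=2)
```

The tests are cheap to review precisely because the intent is already on the page: if one of them fails, I know exactly which part of my own contract it’s protecting.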
The Backend Scar: Systems and Time
After years spent maintaining code written by others (and by myself), I’ve learned that initial speed is almost always a lie if you can’t explain the decisions you made.
As a senior backend developer, I don’t just look at “how fast we ship”. I look at:
- what happens when we need to change something in six months
- who will be on call when it blows up in production
- how much it costs to maintain that part of the system over time
If AI saves me half an hour today but costs me three late-night debugging sessions later, that’s not a good deal.
Why Delegating Creation Is Dangerous
Delegating creation to AI is convenient, but risky.
In practice, people who use AI heavily often feel faster. That’s not always true. In some studies on experienced programmers, those using AI tools perceived themselves as more efficient, but were actually slower on real tasks.
From a quality perspective, the risks are clear:
- plausible but fragile output
- loss of intentionality (you no longer know why something is built that way)
- uniform thinking, standardized style
- growing dependence on the “right prompt”
We’re already seeing the results: “vibe-coded” software that looks great in demos but has weak foundations, and articles that are formally correct yet indistinguishable from one another.
A Mirror Question
Theory is easy. Looking in the mirror is harder.
So I’ll put it like this:
If you had to defend one of your technical decisions today without AI next to you, could you do it?
And also:
How many times have you accepted a solution because it “looked right”, not because you fully understood it?
If these questions cause even mild discomfort, you’re in the right place. It means the topic actually matters to you.
Mini-Checklist: Am I Using AI Well or Poorly?
I like clear rules, so I built myself a small mental checklist.
I’m using AI poorly when:
- I couldn’t rewrite the solution without looking at it
- I didn’t decide the architecture or approach
- I’m optimizing before fully understanding the problem
- I accept things because they “seem right”, not because I validated them
I’m using AI well when:
- I already have a clear direction and solution in mind
- I need a second brain, not a replacement
- I’m reducing friction (repetition, boilerplate), not responsibility
- I could explain every choice out loud in an architecture review
If I left the team tomorrow, would the AI-assisted code still be understandable without me?
That question alone is worth more than many policies.
For Those Working in Teams
When you outsource judgment to AI as an individual, sooner or later you’re outsourcing it as a team. And when AI enters the picture, it doesn’t just change how you write code. It changes the entire team’s workflow.
Field experience and research on AI adoption show that the most visible impact isn’t only on individual developers, but on how pull requests accumulate and are reviewed. If you produce more code faster without clear intent, you flood the repository with larger PRs that are harder to review and hide more fragile decisions.
Code reviews become slower, heavier, and often more superficial, because reviewers don’t have time to reconstruct the original intent.
AI, however, can work extremely well in support of review: suggesting tests, highlighting complex areas that need clarification, proposing local refactors. In this model, the team stays in control and AI acts as an amplifier, not an uncontrolled generator of new code.
The result is slimmer PRs, better reviews, and quality that holds over time instead of chasing speed.
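To show the scale I mean by “local refactor”, here’s a contrived before-and-after, again in Python and not taken from any specific tool or codebase: the team keeps the intent, and AI proposes a mechanical cleanup a reviewer can verify at a glance.

```python
from collections import defaultdict


# Before: the version a human wrote. The intent is clear, the code is just noisy.
def revenue_per_customer(orders):
    totals = {}
    for order in orders:
        if order["status"] == "paid":
            if order["customer"] not in totals:
                totals[order["customer"]] = 0
            totals[order["customer"]] += order["amount"]
    return totals


# After: the kind of local, behavior-preserving cleanup I'd gladly accept
# from AI in a review. A reviewer can check the equivalence in seconds.
def revenue_per_customer_refactored(orders):
    totals = defaultdict(int)
    for order in orders:
        if order["status"] == "paid":
            totals[order["customer"]] += order["amount"]
    return dict(totals)
```

Nothing about the business rule changes, which is exactly why this kind of suggestion is safe to accept and cheap to review.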
A Simple Rule
If you want a principle to keep next to the latest framework sticker, here it is:
Decide what to do yourself. Let AI help you do it better.
Or, in “mental debugger” form:
If you couldn’t explain what you’re doing without AI, you’re already using it the wrong way.
It’s like a GPS. It helps you find the route, but you choose the destination. If you let it decide, don’t be surprised if you end up in a McDonald’s parking lot instead of at the beach.
Who I Am in All This
I don’t want to be faster at producing output. I want to be more solid in making decisions.
If AI helps me do that, it’s an ally. If it makes decisions for me, it’s not.
And in this story, AI has a place only if it knows where it belongs.
Behind me, not in front of me.
