Is Using AI to Code Making You Less Competent? An Anthropic Study Suggests It Might Be

The Data Nobody Wants to Hear: Delegating Code to AI Might Be Slowing Down Your Development as a Programmer.


Let’s be honest. Most of us already use AI to write code. GitHub Copilot, Claude Code, Cursor — they’ve become part of the daily workflow for 65% of developers, according to Stack Overflow’s developer survey. The question is no longer whether to use them, but how to use them.

And that’s where a recent study from Anthropic raises a warning worth taking seriously.

The Study

A few days ago, Anthropic published the results of a randomized controlled trial — the most rigorous experimental design available — on how using AI assistants affects learning new programming skills.

The setup: 52 junior engineers with at least one year of Python experience learned Trio, an asynchronous programming library that none of them knew beforehand. They were split into groups: some with access to AI assistants, others without.

The main result: developers who used AI to write code scored 17% lower on comprehension tests compared to those who did it manually.

Seventeen percent. That’s not a small number.

The Detail That Changes Everything

Now, the study doesn’t say AI is bad. It says something much more nuanced and useful.

Within the same group of AI users, performance was radically different depending on how they used the tool:

  • Those who used AI to ask conceptual questions — understand the “why” behind the code — scored 65% or higher on comprehension tests.
  • Those who used it to generate code directly — asking AI to write the solution — scored less than 40%.

The difference wasn’t the tool. It was the type of interaction with it.

The Underlying Concept: Cognitive Offloading

The researchers identified the central tension as “cognitive engagement” vs “cognitive offloading”.

When you ask AI to solve the problem for you, your brain isn’t processing the solution — it’s just checking if it sounds reasonable. That’s cognitive offloading. And while it’s convenient in the short term, the brain doesn’t learn what it doesn’t need to actively process.

On the other hand, when you use AI to explore concepts, ask questions, understand the reasoning behind a design decision — you’re still the one building the mental model. AI acts as a tutor, not a substitute.

The difference matters especially when you face unfamiliar code, complex bugs, or architecture decisions — precisely the situations where you need your own judgment most.

But Wait — There’s Another Side

Anthropic’s own earlier research shows that AI can cut task completion time by up to 80% when the developer already has solid command of the relevant skills. And in the field, the numbers are striking: approximately 4% of all commits on GitHub are already written by Claude Code.

So the complete picture is this:

  • If you already know the area well: AI makes you noticeably more productive.
  • If you’re learning something new: relying on AI to generate code could be slowing down your actual development.

This makes intuitive sense. A senior developer using Copilot to speed up repetitive tasks already has the judgment to evaluate the output. A junior who lets AI write all the code never develops that judgment — and that becomes a silent problem.

The Paradox of the 2026 Dev

Here’s the paradox this study puts on the table: the tools that make you feel most productive could be the same ones limiting your long-term growth.

And it’s a real paradox, not just a theoretical one. A developer in a study by METR (an AI research organization) confessed: “I’m torn. I’d like to help with the data, but I really like using AI.” When asked to do 50% of their work without AI — at $50/hour — many simply refused. The dependency is already real.

Anthropic doesn’t say “don’t use AI to learn.” What it recommends is something more specific: intentionally design how you use it, especially in learning contexts.

So How Do You Use It Better?

Based on the study’s findings, here’s a practical framework:

When you’re learning something new:
Use AI to ask questions, explore concepts, and understand the reasoning — not to generate the solution. Write the code yourself even if it’s slower. The friction is the learning.

When you already know the area well:
Use it to accelerate. Generate boilerplate, write tests, document, refactor — here cognitive offloading is an advantage, not a risk.

In any case:
Develop the habit of understanding the code AI generates before using it. Not as a security audit (though that matters too), but as deliberate practice in comprehension.
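One concrete way to build that habit (an illustrative exercise, not something prescribed by the study): before adopting an AI-generated helper, predict its behavior on edge cases and write your own assertions for them. The `chunk` function below is a hypothetical example of AI output:

```python
def chunk(items: list, size: int) -> list[list]:
    # Hypothetical AI-generated helper: split a list into
    # consecutive chunks of at most `size` elements.
    return [items[i:i + size] for i in range(0, len(items), size)]

# Hand-written checks: predicting these results before running them
# is the comprehension exercise.
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]  # uneven tail
assert chunk([], 3) == []                                   # empty input
assert chunk([1, 2], 5) == [[1, 2]]                         # size > length
```

If one of your predictions turns out wrong, that gap is exactly the understanding the study suggests you’re skipping when you merely check whether the code “sounds reasonable.”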

The Question For You

85% of devs already use AI regularly. But how many are actively thinking about how they use it versus simply how much they use it?

This study isn’t an argument to stop using these tools — that would be naive and pointless. It’s an argument for using them more deliberately, aware of what you’re sacrificing when you let them think for you.

What’s your experience? Do you feel AI has affected — for better or worse — your ability to solve problems without it? Comments are open.


Sources: Anthropic Research, InfoQ, METR, Stack Overflow Developer Survey 2025, MIT Technology Review