Research shows AI helps people do parts of their job faster. In an observational study of Claude.ai data, we found AI can speed up some tasks by 80%. But does this increased productivity come with trade-offs? Other research shows that when people use AI assistance, they become less engaged with their work and reduce the effort they put into doing it—in other words, they offload their thinking to AI.
It’s unclear whether this cognitive offloading can prevent people from growing their skills on the job, or—in the case of coding—understanding the systems they’re building. Our latest study, a randomized controlled trial with software developers as participants, investigates this potential downside of using AI at work.
This question has broad implications—for how to design AI products that facilitate learning, for how workplaces should approach AI policies, and for broader societal resilience, among others. We focused on coding, a field where AI tools have rapidly become standard. Here, AI creates a potential tension: as coding grows more automated and speeds up work, humans will still need the skills to catch errors, guide output, and ultimately provide oversight for AI deployed in high-stakes environments. Does AI provide a shortcut to both skill development and increased efficiency? Or do productivity increases from AI assistance undermine skill development?
In a randomized controlled trial, we examined 1) how quickly software developers picked up a new skill (in this case, a Python library) with and without AI assistance; and 2) whether using AI made them less likely to understand the code they’d just written.
We found that using AI assistance led to a statistically significant decrease in mastery. On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades. Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.
Importantly, using AI assistance didn’t guarantee a lower score. How someone used AI influenced how much information they retained. The participants who showed stronger mastery used AI assistance not just to produce code but to build comprehension while doing so—whether by asking follow-up questions, requesting explanations, or posing conceptual questions while coding independently.
I’m a senior dev who has been the tech lead over various products throughout my career and have always been really engaged. In my current software engineer role, though, most of the important product and technology decisions are made behind closed doors and handed down to my team. I’ve found that any given idea I have that isn’t a direct and logical conclusion of a decision made in the ivory tower has a 99% chance of getting shot down or ignored. So, my job is more or less to pump out whatever drivel they want every day instead of being someone driving the development of the products I’m building.

Unsurprisingly, AI really helps a lot with that. The fact that AI reduces my engagement is a feature, not a bug, because every time I become engaged and start getting excited about an idea, it’s always met with indifference or even disdain, leading to frustration and depression. AI has definitely improved my mental health because I can give a lot less of a shit.

The CEO sent out an e-mail the other day saying our #1 priority is to use AI to make ourselves more productive.
My response:

Familiar but with a difference in my case.
I’ve spent my entire career alternating between two experiences.
One is being grilled about why I am delivering what I think should be done instead of what the executives told me to do.
The other is getting awards and promotions when it turns out that I was right and the customers loved it.
It happened to work out for me to do it my way. Though my executives have usually resented the implication that they don’t have good vision, they simultaneously know how to leverage my success for themselves. This most recent promotion in particular was stalled to reward better drones instead, but it’s looking like they’ll have to pivot back to rewarding the folks the paying customers actually like instead of those who feed the executive egos.
OP, please add a fat disclaimer at the bottom that this is from Anthropic, a major AI company.
Also, society, please learn to read all sources critically.
Importantly, using AI assistance didn’t guarantee a lower score. How someone used AI influenced how much information they retained. The participants who showed stronger mastery used AI assistance not just to produce code but to build comprehension while doing so—whether by asking follow-up questions, requesting explanations, or posing conceptual questions while coding independently.
importantly, in our own funded study, we found that those who used our product the most did the best
“you’re holding it wrong”
I would love to read an independent study on this, but this is from Anthropic (the guys that make Claude) so it’s definitely biased.
Speaking for myself, I’ve been using LLMs to help bridge small gaps in knowledge. For example, I know what I need to do; I just don’t know or remember the specific functions or libraries I need to do it in Python. An LLM is extremely useful in those moments, and it’s faster than searching and asking on forums. And to be transparent, I did learn a few tricks here and there.
But if someone lets the LLM do most of the work - like vibe coders - I doubt they will learn anything.
I do the same. I start with the large task, break it into smaller chunks, and I usually end up writing most of them myself. But occasionally there will be one function that is just so cookie-cutter, so insignificant to the overall function of the program, and so far outside my normal area of expertise that I’ll offload it to an LLM.
They actually do pretty well for tasks like that, when given a targeted task with very specific inputs and outputs, and I can learn a bit by looking at what it ended up generating. I’d say it’s only about 5-10% of the code that I write that falls into the category where an LLM could realistically take it on though.
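To make that concrete, here’s a hypothetical sketch of the kind of function I mean (the name and log format are invented, but it has the narrow, well-specified shape an LLM handles well):

```python
from datetime import datetime

# Hypothetical "cookie-cutter" task worth delegating:
# narrow scope, exact input and output, nothing architectural.
def parse_log_timestamp(line: str) -> datetime | None:
    """Extract the leading timestamp from a log line.

    Expects lines like '2024-05-01T12:34:56Z worker started'.
    Returns None if the line doesn't start with a valid timestamp.
    """
    token = line.split(" ", 1)[0]
    try:
        return datetime.strptime(token, "%Y-%m-%dT%H:%M:%SZ")
    except ValueError:
        return None

print(parse_log_timestamp("2024-05-01T12:34:56Z worker started"))  # 2024-05-01 12:34:56
print(parse_log_timestamp("not a timestamp"))                      # None
```

Because the inputs and outputs are pinned down this tightly, it’s also easy to review and test whatever comes back.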
I’m trying out using Claude on a problem at work that has been frustrating; lots of unexpected edge cases that require big changes.
I definitely know less about my “solution” (it’s not done yet, but it’s getting close) than if I had actually sat down and done it all myself. And it’s taken a lot of back and forth to get to where I am.
It’d probably have gone better if, once Claude provided a result, I had gone through it completely and made sure I understood every aspect of it. But man, when it just spits out a full script, the urge to just run it and see if it works is strong. And if it’s close but not quite right, the feeling is “well, let me just ask about this one part while I’m already here,” and then you get a new complete script to try. That loop continues, and I never get around to really going through and fully understanding the code.
Do you tell Claude to make a plan first?
That helps me tremendously. Whenever something needs to be modified, I tell it to update the plan first, and to stick to the plan.
That way, Claude doesn’t rewrite code that has already been implemented as part of the plan.
And understanding the plan helps understanding the code.
Sometimes if I know there will be a lot of code produced, I’ll tell it to add a quick comment on every piece it adds or modifies with a reference to the step in the plan it refers to. Makes code reviewing much more pleasant and easier to follow. And the bugs and hallucinations stick out more too.
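For example, the annotated output ends up looking something like this (illustrative only; the plan steps and functions here are invented):

```python
import json

# Step 2.1 of the plan: load raw records from the export file
def load_records(path: str) -> list[dict]:
    with open(path) as f:
        return json.load(f)

# Step 2.2 of the plan: drop records missing the required "id" field
def clean_records(records: list[dict]) -> list[dict]:
    return [r for r in records if "id" in r]

# Step 3.1 of the plan: report how many records survived cleaning
def summarize(records: list[dict]) -> str:
    return f"{len(records)} valid records"
```

When a chunk of code doesn’t match the step comment above it, that’s usually where the bug or hallucination is hiding.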
Smart, I’ll definitely try that out.
You probably know this, but aside from that, Claude has “plan mode”. When writing, hit “Shift+Tab” a few times until you select it. Claude won’t immediately start coding when you give it instructions.
Also, ask him about the “superpowers” and “ask questions” skills. Game changers too.
Agreed, using a planning phase makes a huge difference. It breaks the implementation into steps, making it far easier to review or manually refactor parts of the code.