Note: this lemmy post was originally titled MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline and linked to this article, which I cross-posted from this post in !fuck_ai@lemmy.world.
Someone pointed out that the “Science, Public Health Policy and the Law” website which published this click-bait summary of the MIT study is not a reputable publication deserving of traffic, so, 16 hours after posting it I am editing this post (as well as the two other cross-posts I made of it) to link to MIT’s page about the study instead.
The actual paper is here and was previously posted on !fuck_ai@lemmy.world and other lemmy communities here.
Note that the study with its original title got far fewer upvotes than the click-bait summary did 🤡
The obvious AI-generated image and the generic name of the journal made me think there was something off about this website/article, and sure enough, the writer of this article is on X claiming that COVID-19 vaccines are not fit for humans and that there’s a clear link between vaccines and autism.
Neat.
Thanks for the warning. Here’s the link to the original study, so we don’t have to drive traffic to that guy’s website.
https://arxiv.org/abs/2506.08872
I haven’t got time to read it and now I wonder if it was represented accurately in the article.
That’s a math article
Fixed. Thanks!
Thanks for pointing this out. Looking closer I see that that “journal” was definitely not something I want to be sending traffic to, for a whole bunch of reasons - besides anti-vax they’re also anti-trans, and they’re gold bugs… and they’re asking tough questions like “do viruses exist” 🤡
I edited the post to link to MIT instead, and added a note in the post body explaining why.
Public health flat earthers
Isn’t that the same guy that plays Michael Bolton in Office Space?
For those wondering: Scruffy, Roberto, and WERNSTROM
So if someone else writes your essays for you, you don’t learn anything?
I just asked ChatGPT if this is true. It told me no and to increase my usage of AI. So HA!
relying on AI makes people stupid?
Who knew?
cognitive decline.
Another reason for refusing those so-called tools… it could turn one into another tool.
More like it would cause you to need the tool in order to be the tool that you are already mandated to be.
It’s a clickbait title. Using AI doesn’t actually cause cognitive decline. They’re saying using AI isn’t as engaging for your brain as the manual work, and then broadly linking that to the widely understood concept that you need to engage your brain to stay sharp. Not exactly groundbreaking.
Sir this is Lemmy & I’m afraid I have to downvote you for defending AI which is always bad. /s
Anyone who doubts this should ask their parents how many phone numbers they used to remember.
In a few years there’ll be people who’ve forgotten how to have a conversation.
I don’t see how that’s any indicator of cognitive decline.
Also people had notebooks for ages. The reason they remembered phone numbers wasn’t necessity, but that you had to manually dial them every time.
And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, [writing] will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.
—a story told by Socrates, according to his student Plato
The other day I saw someone ask ChatGPT how long it would take to perform 1.5 million instances of a given task, if each instance took one minute. Mfs cannot even divide 1.5 million minutes by 60 to get 25,000 hours, then by 24 to get 1,041 days. Pretty soon these people will be incapable of writing a full sentence without ChatGPT’s input.
Edit to add: divide by 365.25 to get 2.85 years. Anyone who can tell me how many months that is without asking an LLM gets a free cookie emoji
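For anyone who’d rather sanity-check the arithmetic in code than ask an LLM, here’s a quick Python sketch of the same unit conversions (just an illustration of the division chain above):

```python
# 1.5 million one-minute tasks, converted into larger units step by step.
total_minutes = 1_500_000

hours = total_minutes / 60   # 25,000 hours
days = hours / 24            # ~1,041.67 days
years = days / 365.25        # ~2.85 years

print(f"{hours:,.0f} h = {days:,.1f} d = {years:.2f} y")
```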
I want a free cookie emoji!
I didn’t ask an LLM, no, I asked Wikipedia:
The mean month-length in the Gregorian calendar is 30.436875 days.
Edit: but since I already knew a year is 365.2425 I could, of course, have divided that by the 12 months of a year to get that number.
So,
1041 ÷ 30.436875 ≈ 34 months and…
0.2019343313 × 30.436875 ≈ 6 days and…
0.146249999987 × 24 ≈ 3 hours and…
0.509999999688 × 60 ≈ 30 minutes and…
0.59999998128 × 60 ≈ 35 seconds and…
0.9999988768 × 1000 ≈ 999 milliseconds and
0.9999988768 × 1000000 ≈ 999999 nanoseconds
34 months + 6d 3h 30m 35s 999ms 999999 ns (or we could call it 36s…)
Edit: 34 months is better known as 2 years and 10 months.
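The whole cascade above is just “take the integer part, carry the remainder into the next smaller unit.” A small Python sketch of the same breakdown, using the same 30.436875-day mean month:

```python
# Break 1041 days into months / days / hours / minutes / seconds,
# peeling off the integer part at each step and carrying the remainder.
MEAN_MONTH = 30.436875  # Gregorian mean month: 365.2425 / 12 days

t = 1041 / MEAN_MONTH          # total, expressed in months
months = int(t)
t = (t - months) * MEAN_MONTH  # leftover, in days
days = int(t)
t = (t - days) * 24            # leftover, in hours
hours = int(t)
t = (t - hours) * 60           # leftover, in minutes
minutes = int(t)
seconds = (t - minutes) * 60   # leftover, in seconds

print(months, days, hours, minutes, round(seconds))
```

This reproduces the 34 months, 6 days, 3 hours, 30 minutes and ~36 seconds worked out by hand above.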
🍪
You got as far as nanoseconds so here’s a cupcake for extra credit too 🧁
Thank you, you really didn’t have to. That cupcake is truly the icing and it’s almost too much! I’ll give you this giant egg of unknown origin: 🥚 in return, as long as you promise to use it for baking and making some more of those cupcakes for whoever else needs or deserves one within the next few days, hours, minutes, seconds, milliseconds and 999999 bananoseconds 🍌
Rough estimate using 30 days as average month would be ~35 months (1050 = 35×30). The average month is a tad longer than 30 days, but I don’t know exactly how much. Without a calculator, I’d guess the total result is closer to 34.5. Just using my own brain, this is as far as I get.
Now, adding a calculator to my toolset, the average month is 365.2425 d / 12 m = 30.4377 d/m. The total result comes out to about 34.2, so I overestimated a little.
Also, the total time is 1041.66…, which would be more correctly rounded to 1042, but that has negligible impact on the result.
Edit: I saw someone else went even harder on this, but for early morning performance, I’m satisfied with my work
🍪
Pirat gave me an egg emoji, so I baked some more cupcake emojis. Have one for getting it so close without even using a calculator 🧁
I hope your weekend is as awesome as you are
I swear the companies hard code solutions for weird edge cases so their investors are fooled into believing that their LLMs are getting smarter.
You forgot doing the years, which is a bit trickier if we take into account the leap years.
According to the Gregorian calendar, every fourth year is a leap year unless it’s divisible by 100 – except those divisible by 400 which are leap years anyway. Hence, the average length of one year (over 400 years) must be:
365 + 1⁄4 − 1⁄100 + 1⁄400 = 365.2425 days
So,
1041 / 365.2425 ≈ 2.85 years
Or 2 years and…
0.850161194275 × 365.2425 ≈ 310 days and…
0.514999999987 × 24 ≈ 12 hours and…
0.359999999688 × 60 ≈ 21 minutes and…
0.59999998128 × 60 ≈ 36 seconds
1041 days is just about 2y 310d 12h 21m 36s
Wtf, how did we go from 1041 whole days to fractions of a day? Damn leap years!
Had we not been accounting for them, we would have had 2 years and…
0.852054794521 × 365 = 311.000000000165 days
Or simply 2y 311d if we just ignore that tiny rounding error or use fewer decimals.
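The every-fourth-year-except-centuries-except-every-400th rule quoted above drops straight into code; a quick Python sanity check (just an illustration) that it really averages out to 365.2425 days:

```python
def is_leap(year: int) -> bool:
    # Gregorian rule: every 4th year, except centuries,
    # except those divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Average year length over one full 400-year Gregorian cycle
# (97 leap years per 400 years).
avg_year = sum(365 + is_leap(y) for y in range(400)) / 400
print(avg_year)  # 365.2425
```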
Engineers be like…
1041/365 ≈ 2.852
0.852 × 365 = 310.98
Thus 2 y 311 d. Or really, fuck it, 3 y
Edit. #til
The lemmy app on my phone does basic calculator functions.
Or really, fuck it 3 y
Seems about right! But really, it often seems pretty useful to me, since it removes a lot of unnecessary information throughout a content feed or thread, though I usually still want to be able to see the exact date and time when tapping or hovering over the value for further context.
Edit: However, the lemmy client I use, Eternity, shows the entire date and time for each comment instead of the age of it, and I’m fine with that too, but unsure what I actually prefer…
The lemmy app on my phone does basic calculator functions.
Which client and how?
I have already seen a massive decline in conversation skills, both personally and observationally (watching other people).
Most people now talk to each other like they are exchanging internet comments. They don’t ask questions, they don’t really engage… they just exchange declaratory sentences. Heck, most of the dates I went on the past few years… zero real conversation and just vague exchanges of opinion and commentary. A couple of them went full-on streamer, just ranting at me and randomly stopping to ask me nonsense questions.
Most of our new employees the past year or two really struggle with any verbal communication and if you approach them physically to converse about something they emailed about they look massively uncomfortable and don’t really know how to think on their feet.
Before the pandemic I used to actually converse with people and learn from them. Now everyone I meet feels like interacting with a highlight reel. What I don’t understand is why people are choosing this and then complaining about it.
They’ll have forgotten how to remember anything.
That doesn’t require a few years; there are loads of people out there already who have forgotten how to have a conversation.
Especially moderators, who typically are the polar opposite of the word. You disagree with my factually incorrect statement? Ban. Problem solved. You disagree with my opinion? Ban.

Similarly, I’ve seen loads of users on Lemmy (and on Reddit before) who just ban anyone who asks questions or who disagrees.

It’s so nice and easy, living in an echo chamber, but it does break your brain.
I could remember so many phone numbers; nowadays I just click their names on my rectangle. The future sucks and is weakening us!
People don’t memorize phone numbers anymore? Why not? Dialing is so much quicker than searching your contacts for the right person.
This is the furthest thing from my experience lol I can type 2 letters in my phone, see the right name and press call. I haven’t memorised a phone number since before the year 2000* (*hyperbole)
I still remember all my family’s phone numbers from when I was a kid growing up in WV in the 70s.
I currently have my wife’s number memorized and that’s it. Not my mom, my kids, friends, anybody. I just don’t have to. It’s all in my phone.
But I’m also of the opinion that NOT having this info in my head has freed it up for more important things. Like memes and cat videos 🤣
But seriously, I don’t think this tool, and AI is just a tool, is dumbing me down. Yes, I think about certain things less, but it allows me to ask different or better questions, and just learn differently. I don’t necessarily trust everything it spits out, I double-check all code it produces, etc. It’s very good at explaining things or providing other examples. Since I’m older, I’ve heard similar arguments about TV and/or the Internet. LLMs are a very interesting tool that have good and bad uses. They are not intelligent, at least not yet, and are not the solution to everything technical. They are very resource-intensive and should be used much more judiciously than they currently are.
Ultimately it boils down to this: if you’re lazy, this allows you to be more lazy. If you don’t want to continue learning and just rely on it, you are gonna have a bad time. Be skeptical, ask questions, employ critical thinking, take in information from lots of sources, and in the end you will be fine. That is, unless it becomes sentient and wipes us all out.
Thank you for providing a better Source and editing the post!
Been vibe coding hard on a new project this past week. It’s been working really well, but I feel like I watched a bunch of TV. It’s passive enough that it’s like flipping through channels, paying a little attention and then going to the next.

Whereas coding it myself would engage my brain, and it might feel like reading.
It’s bizarre because I’ve never had this experience before.
Are history teachers wasting their time?
what should we do then? just abandon LLM use entirely or use it in moderation? i find it useful to ask trivial questions and sort of as a replacement for wikipedia. also what should we do to the people who are developing this ‘rat poison’ and feeding it to young people’s brains?
edit: i also personally wouldn’t use AI at all if I didn’t have to compete with all these prompt engineers and their brainless speedy deployments
Thing is, that “trivial question asking” is part of what causes this phenomenon
The abstract seems to suggest that in the long run you’ll outperform those prompt engineers.
How does it suggest that?
in the long run won’t it just become superior to what it is now and outperform us? the future doesn’t look bright tbh for comp sci, the only good paths i see are if you’re studying AI/ML or Security
Spoiler: no, it will not
so avoid LLMs entirely when programming, and also studying AI/ML isn’t a good idea?
Probably studying AI/ML or security is a fine choice if that’s what you want to do. But if you want to go into CS, it’s probably not a bad choice to do. IMO it’s much less likely that AI will completely replace all or even many engineers (or people in other industries).
I do not see how it can be a good or bad idea. Do whatever you want to do, however is best for you
what should we do then?
i also personally wouldn’t use AI at all if I didn’t have to compete with all these prompt engineers and their brainless speedy deployments
Gotta argue that your more methodical and rigorous deployment strategy is more cost-efficient than guys cranking out bug-ridden releases.
If your boss refuses to see it, you either go with the flow or look for a new job (or unionize).
I’m not really worried about competing with the vibe coders. At least on my team, those guys tend to ship more bugs, which causes the fire alarm to go off later.
I’d rather build a reputation of being a little slower, but more stable and higher quality. I want people to think, “Ah, nice. Paequ2 just merged his code. We’re saved.” instead of, “Shit. Paequ2 just merged. Please nothing break…”
Also, those guys don’t really seem to be closing tickets faster than me. Typing words is just one small part of being a programmer.
you should stop using it and use wikipedia.
being able to pull relevant information out of a larger body of it is an incredibly valuable life skill. you should not be replacing that skill with an AI chatbot
so i shouldnt be using LLMs at all? what is your use case?
16 hours after posting it I am editing this post (as well as the two other cross-posts I made of it) to link to MIT’s page about the study instead.
Better late than never. Good catch.
What a ridiculous study. People who got AI to write their essay can’t remember quotes from their AI written essay? You don’t say?! Those same people also didn’t feel much pride over their essay that they didn’t write? Hold the phone!!! Groundbreaking!!!
Academics are a joke these days.
I see you skipped that part of academia where they taught that, in science, there are steps between hypothesis and conclusion even if you already think you know the answer.
Or one could entirely skip the part where they read the study beyond the headline.
I did. Did you?
And using a calculator isn’t as engaging for your brain as manually working the problem. What’s your point?
Seems like you’ve made the point succinctly.
Don’t lean on a calculator if you want to develop your math skills. Don’t lean on an AI if you want to develop general cognition.
I don’t think this is a fair comparison because arithmetic is a very small and almost inconsequential skill to develop within the framework of mathematics. Any human that doesn’t have severe learning disabilities will be able to develop a sufficient baseline of arithmetic skills.
The really useful aspects of math are things like how to think quantitatively. How to formulate a problem mathematically. How to manipulate mathematical expressions in order to reach a solution. For the most part these are not things that calculators do for you. In some cases reaching for a calculator may actually be a distraction from making real progress on the problem. In other cases calculators can be a useful tool for learning and building your intuition - graphing calculators are especially useful for this.
The difference with LLMs is that we are being led to believe that LLMs are sufficient to solve your problems for you, from start to finish. In the past, students who developed a reflex to reach for a calculator when they didn’t know how to solve a problem were thwarted by the fact that the calculator wouldn’t actually solve it for them. Nowadays students develop that reflex and reach for an LLM instead, and they can walk away with the belief that the LLM is really solving their problems, which creates both a dependency and a misunderstanding of what LLMs are really suited to do for them.
I’d be a lot less bothered if LLMs were made to provide guidance to students, a la the Socratic method: posing leading questions to the students and helping them to think along the right tracks. That might also help mitigate the fact that LLMs don’t reliably know the answers: if the user is presented with a leading question instead of an answer then they’re still left with the responsibility of investigating and validating.
But that doesn’t leave users with a sense of immediate gratification which makes it less marketable and therefore less opportunity to profit…
arithmetic is a very small and almost inconsequential skill to develop within the framework of mathematics.
I’d consider it foundational. And hardly small or inconsequential given the time young people spend mastering it.
Any human that doesn’t have severe learning disabilities will be able to develop a sufficient baseline of arithmetic skills.
With time and training, sure. But simply handing out calculators and cutting math teaching budgets undoes that.
This is the real nut of the comparison. Telling kids “you don’t need to know math if you have a calculator” is intended to reduce the need for public education.
I’d be a lot less bothered if LLMs were made to provide guidance to students, a la the Socratic method: posing leading questions to the students and helping them to think along the right tracks.
But the economic vision for these tools is to replace workers, not to enhance them. So the developers don’t want to do that. They want tools that facilitate redundancy and downsizing.
But that doesn’t leave users with a sense of immediate gratification
It leads them to dig their own graves, certainly.
Don’t lean on an AI if you want to develop general ~~cognition~~ essay writing skills.

Sorry, the study only examined the ability to respond to SAT writing prompts, not general cognitive abilities. Further, they showed that the ones who used an AI just went back to “normal” levels of ability when they had to write it on their own.
the ones who used an AI just went back to “normal” levels of ability when they had to write it on their own
An ability that changes with practice
It’s important to know these things as fact instead of vibes and hunches.
Sure, and it’s important to know how to perform math functions without a calculator. But once you learn it, and move on to something more advanced or day-to-day work, you use the calculator.
I don’t always use the calculator.
Do you bench press 100 lbs and then give up on lifting altogether?
Do you believe that using AI locks you out of doing something any other way again?
Why would I?
Well what do you mean with the lifting metaphor?
Many people who use AI are doing it to supplement their workflow. Not replace it entirely, though you wouldn’t know that with all these ragebait articles.
I mean what I said. Working unaided begets strength.
Yeah, I went over there with the idea that it was grandiose and not peer-reviewed. Turns out it’s just a cherry-picked title.
If you use an AI assistant to write a paper, you don’t learn any more from the process than you do from reading someone else’s paper. You don’t think about it deeply and come up with your own points and principles. It’s pretty straightforward.
But just like calculators, once you understand the underlying math, unless math is your thing, you don’t generally go back and do it all by hand because it’s a waste of time.
At some point, we’ll need to stop using long-form papers to gauge someone’s acumen in a particular subject. I suspect you’ll be given questions in real time and need to respond to them on video with your best guesses to prove you’re not just reading it from a prompt.
You better not read audiobooks or learn from videos either. That’s pure brainrot. Too easy.
Look at this lazy fucker learning trig from someone else, instead of creating it from scratch!
You would learn quite a lot creating it from scratch.
LoL. These damn kids! No one wants to re-invent the wheel anymore! Well, if you’re not duplicating the works of Hipparchus of Nicaea, you’re a lazy good for nothing!
Damn, it’d be crazy if I actually said that.
Oh, thank god you made sure to clarify you didn’t. Someone may have gotten confused!
Does this also explain what happens with middle and upper management? As people have moved up the ranks during the course of their careers, I swear they get dumber.
My dad around 1993 designed a cipher better than RC4 (I know it’s not a high mark now, but it kinda was then) at the time, which passed audit by a relevant service.
My dad around 2003 was still intelligent enough that he’d explain interesting mathematical problems to me and my sister, and notice similarities to them, and other interesting things, in real life.
My dad around 2005 was promoted to a management position and was already becoming kinda dumber.
My dad around 2010 was a fucking idiot, you’d think he’s mentally impaired.
My dad around 2015 apparently went to a fortuneteller to “heal me from autism”.
So yeah. I think it’s a bit similar to what happens to elderly people when they retire. Everything should be trained, and real tasks give you a feeling of life; giving orders and going to endless could-be-an-email meetings make you both dumb and depressed.
that’s the peter principle.
people only get promoted until their inadequacies/incompetence show, and then their job becomes covering for it.

hence why so many middle managers’ primary job is managing the appearance of their own competence first and foremost, and they lose touch with the actual work being done… which is a key part of how you actually manage it.
Yeah, that’s part of it. But there is something more fundamental, it’s not just rising up the ranks but also time spent in management. It feels like someone can get promoted to middle management and be good at the job initially, but then as the job is more about telling others what to do and filtering data up the corporate structure there’s a certain amount of brain rot that sets in.
I had just attributed it to age, but this could also be a factor. I’m not sure it’s enough to warrant studies, but it’s interesting to me that just the act of managing work done by others could contribute to mental decline.
That was my first reaction. Using LLMs is a lot like being a manager. You have to describe goals/tasks and delegate them, while usually not doing any of the tasks yourself.
Fuck, this is why I’m feeling dumber myself, after getting promoted to more senior positions where I only work at the architectural level and on stuff that the more junior staff can’t work on.
With LLMs basically my job is still the same.
Since stepping back from being a direct practitioner, I will say all my direct reports are “faster” in the programs we use at work than I am, but I’m still waaaaaaaaaay more efficient than all of them (their inefficiencies drive me crazy, actually). But I’ve also taken up a lot of development to keep my mind sharp. If I only had my team to manage and not my own personal projects, I could really see regressing a lot.
That’s the Peter Principle.
I’d expect something similar, at least. When one doesn’t keep up to date on new information and lets their brain coast, it atrophies like any other muscle would from disuse.