The Internet being mostly broken at this point is driving me a little insane, and I can’t believe that people who have the power to keep a functioning search engine for themselves wouldn’t go ahead and do it.
I wonder about this every time I see people(?) crowing about how amazing AI is. Like, is there some secret useful AI out there that plebs like me don’t get to use? Because otherwise, huh?


No. They’re drinking their own Kool-Aid.
They’ve offloaded what little thinking they did to LLMs (not that LLMs can think, but in this case it makes no difference), and at this point would no longer be able to function if they had to think for themselves.
Don’t think of them as human people with human needs.
They’re mere parasites, all higher functions withered away through lack of use, now more than ever.
They could die and be replaced by their chatbots, and we wouldn’t notice a difference.
I’m not sure Google has offloaded all of their thinking to LLMs.
Google still employs very very smart people.
They’d just have to be morally bankrupt human refuse to be contributing actively to the profit-driven destruction of the internet and mass public surveillance like they are, so the rest of your points still stand.
And while a lot of that intelligence may be wasted, it’s more a function of banal evil and corporate bloat than LLMs.
We’re talking execs here, not people.
Of course they’ve got smart people they’re still in the process of getting rid of, but they’re not who the OP was asking about, and they’re mostly irrelevant anyway (and have been since long before LLMs became a problem), since they’re not the ones making decisions.
(Even when talking about smart people, though, being smart about certain things doesn’t mean they’re immune to LLMs. If those things are good at anything, it’s catfishing people into believing they’re actually intelligent and useful for something, and many a smart developer or scientist involved in their development has fallen for their stochastic bullshit. And once the brain damage has set in, it appears to be quite permanent.)
While I agree that execs are not people, I don’t think they’re being controlled by LLMs.
They’re already idiots for the most part though, so what does it matter?
The most horrific part is that we can’t tell the difference.
Controlled by LLMs or not, their actions would be indistinguishable.
Controlled by LLMs, perhaps not, but I believe the execs pushing AI are drinking as much AI Kool-Aid as anyone you know who has AI psychosis. That could be why AI is so sycophantic: it’s how the execs at the big 7 want the world to treat them, and they’ve drunk so much of their own Kool-Aid that they believe it now.
They’re obsessed. When there’s manufactured outrage, it starts out sensible but quickly evolves into radicals spewing what you see up top. AI and chatbots have issues, but the push to convince the public to hate them was heavy on Lemmy. So now these radicals are living in their own toxic fantasy.
I started tinkering with AI right around the time ChatGPT rose to prominence. Locally, on my own machine.
I’m not a doctoral level researcher but I mostly get the tech.
I couldn’t agree more. People use AI as a blanket term and don’t understand the difference between an LLM and a GAN, or any of the dozens of other kinds of models.
If it’s AI, it’s bad. Just full stop. The anger of people decrying the death of artistic beauty on subs that prominently feature MS Paint stick-figure drawings and shitty distorted images makes no sense to me. This isn’t costing anyone a job. It’s fucking garbage content, with no agenda, and always was.
Having autonomous LLMs posting things is problematic, but having AI-generated shitposts isn’t.
There is fuck all wrong with using AI to make art to hang on your walls, or funny t-shirts, or ridiculous banners, or funny pictures to share with friends. The people who decry the death of art have never bought anything in a gallery; they were fine with artists getting paid fuck all before AI. They weren’t contributing to artists’ livelihoods in any meaningful way.
And the most vocal critics seem to understand the least about it. They hate anything made with AI and just assume it was made using OpenAI, because that’s the only thing their rage-addled minds can process existing.
They say it’s theft and we should ban everything (how’s that working out for you?) instead of clamouring for fair compensation for anyone whose work is being used to train a model.
They’ll yell that all these models are based on theft. And sure. But a) I don’t give a flying fuck about a corporation’s right to exploit an artist and profit off their work, and never have. And b) they’ll respond to the suggestion that we create new models that fairly compensate people by yelling louder and becoming irate.
They’re not rational. There are many valid criticisms of the tech, but you can’t even talk to these people about addressing them. Because a lot of the criticisms can and should be addressed. They won’t hear it.
I’ve commissioned paid art for RPG campaigns, and I can’t draw a distinction between AI and LLMs because I get yelled at by people saying, “It’s just the name of the field! Nobody thinks the Sims games are actually intelligent!”
So am I allowed to draw that line now? And do you see that me using ComfyUI on my local machine does actually mean an artist won’t get paid? This position isn’t 100% strawman.
My main issue is that I think maybe they can’t be patched, because they’re not deterministic systems. And I have personally been asked by an executive whether a team could reasonably be replaced by LLM agents behind their backs as soon as the tech was available. How was I supposed to form your opinion given those experiences, and why do you think I’m rage-addled rather than just tired?
I don’t really understand what you’re getting at here.
I don’t think they are drinking their own Kool-Aid.
Meta’s Zuck and TikTok’s CEO don’t let their kids on their respective short-form content platforms because they know the harmful effects.
They are smart enough to know not to dip into their stash.
I think they definitely have their own version of it.
Nah, you can actually see some of them developing AI psychosis.
https://medium.com/write-a-catalyst/this-prominent-vc-investor-just-had-a-chatgpt-induced-psychosis-on-twitter-heres-what-this-means-197ae5df77f4
You’ve got to understand that most AI execs aren’t technical people; they’re hype men. And LLMs are weirdly good at hype and the illusion of technical correctness. So they don’t have a problem with it.
Sam Altman saying he uses ChatGPT to tell him how to act with his baby is one of the few things he’s said that I actually believe. Of course, he’s also got a team of nannies he couldn’t be bothered to mention, but the trust in ChatGPT is there.
That’s not a thing.