Okay, but Grok is still surely part of the “Anxiety around AI is growing rapidly in the US, research shows” phenomenon, as Grok is one of the various AIs people are aware of, and anxious about.
Your words read to me like you have kept yourself aware of the positive benefits of using AI - which many people on Lemmy, including to some degree myself, have done far less of.
But there are some negatives as well…
There are plenty of negatives to any new tech; anything can be carelessly or ignorantly misapplied.
The computer has been coming for our jobs since it was created. Bob Cratchit no longer works for Ebenezer Scrooge; he’s been replaced with software.
People over-trusting software has been problematic since software became accessible to be over-trusted. A favorite (horrible) example from not-so-long ago, but pre-ChatGPT release I believe: https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/
For the past year-plus it has been popular sport to ask AI a question and poke fun at how wrong the answer is. I, too, get plenty of wrong answers from it - and anyone who trusts what it says - or a Google search, or some post by some random troll with an axe to grind on some social media site, or even your high school whatever teacher - without verifying the results… gets what they deserve, in my opinion.
What changed for me within the last 12-16 months is that, at least for questions in software development, the answers started being correct more than half the time. That was a critical watershed, because in essence it means that if you give your AI the tools to test its own work, it can work on hard problems that have easy methods to test for correctness (starting with compiler errors), and basically chip away at them - fixing problems until it has an answer that is correct enough to pass all the tests you have specified for it. Before that, an AI agent left to work on problems without guidance would more often get stuck in loops, or run off the rails altogether and never reach a viable solution.
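To make that concrete, here is a minimal sketch of that “chip away” loop in Python. Everything here is a hypothetical stand-in - the build/test commands and the `ask_llm_to_fix` callback are placeholders, not any particular vendor’s API:

```python
import subprocess

MAX_ATTEMPTS = 10  # bail out rather than let the agent loop forever

def run_checks() -> tuple[bool, str]:
    """Run the project's own correctness checks: compile first, then tests."""
    for cmd in (["make", "build"], ["make", "test"]):  # placeholders; any build/test runner works
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return False, result.stdout + result.stderr  # the error text is the feedback
    return True, ""

def chip_away(ask_llm_to_fix) -> bool:
    """Hand failure output back to the model until every specified test passes."""
    for _ in range(MAX_ATTEMPTS):
        ok, errors = run_checks()
        if ok:
            return True  # correct enough to pass all the tests you have specified
        ask_llm_to_fix(errors)  # hypothetical callback: edits the code based on the errors
    return False  # stuck in a loop or off the rails - a human takes over
```

The loop only terminates on success because the tests define success; that is exactly why problems with easy correctness checks were the first to fall.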
In the past 6 months or so, tools like Claude have gotten much better - incorporating into their normal response behavior a lot of the things that I (and many others) had to “tell them” manually 12 months ago to get good results, and anticipating and fixing problems in their work before presenting a solution for your consideration.
The language they present solutions in has traditionally been far too over-confident. That’s a huge fault, which I attribute to training on blog posts by know-it-all blowhards who similarly present their ideas as gospel truth rather than as potentially flawed best efforts.
Clue for the clueless: even the best human experts in their fields are still only providing potentially flawed best effort answers. Once you leave self-defined fields like mathematics, all we have are our best guesses about how things really work.
One thing that your comments touch on here is just how little of the “Anxiety around AI” actually has to do with AI.
When, e.g., Oracle lays off 30k workers, how little of that truly has to do with AI, versus market instability etc.? What complicates the issue is that, most often, the corporation will claim the layoffs are to streamline the company for a future in which AI will need fewer workers - so, to prepare for that now… they’ll just go ahead and get rid of them immediately.
So this isn’t even people using AI inappropriately; this is people blaming AI for what they wanted to do anyway, for reasons of profit.
Then again, events such as those presage what is to come: when AI truly can do it all, how will humans be able to earn a paycheck? Spoiler alert: not all of us will. And in the meantime, especially, there will be a period of transition and upheaval.
This is what I felt your comments lacked acknowledgement of: not the downside to using the tools, but the wider conversation that uses the keyword “AI” yet really has barely anything to do with it, as opposed to the political and social and economic forces at play.
I felt your comments lacked acknowledgement of: not the downside to using the tools, but the wider conversation that uses the keyword “AI” yet really has barely anything to do with it

Yeah, I get tunnel vision like that: when people say “AI is a problem”, my focus is on the AI, not on people’s underlying pre-existing problems that haven’t gone away since AI “came out / got big”.
The word itself keeps changing its meaning: it used to mean ML techniques, then it came to mean gen-AI on the horizon, and now it supposedly means “capitalism distilled”? See e.g. https://www.structural-integrity.eu/is-there-a-need-for-ai-after-capitalism/ for an excellent example of the kind of anxiety surrounding AI that we are talking about.
I agree with you that ML itself is not a problem, nor even is LLM technology. Although, like nuclear power, the more powerful the tool becomes as we advance towards true AI, the greater the danger its misuse portends - as you said. And, also as you said, as it got big the discussion moved towards that latter topic without bothering to be precise about what was being discussed, instead calling everything by the (clickbait?) buzzword “AI”.
The “danger line” I perceive is when we give anything “agency”. Take a float-level switch on a lake controlling the water-release gates on a dam - such a simple thing. But if it malfunctions (and nobody notices in time), the dam might get over-topped with water, or the whole lake might be emptied - potentially flooding downstream communities, or simply wasting valuable water needed to get through the next dry season. All that from a simple little (binary) bit of “artificial intelligence” - but once it is granted “agency” to operate the flood gates without competent oversight, it becomes dangerous.
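As a toy illustration (all names hypothetical): the danger is not the one-bit “intelligence” itself, it is wiring that bit straight to the gates. A competent-oversight version cross-checks an independent signal and refuses to act alone:

```python
def naive_gate_command(float_switch_high: bool) -> str:
    """One binary bit with full agency: a stuck switch can empty the lake."""
    return "OPEN" if float_switch_high else "CLOSE"

def supervised_gate_command(float_switch_high: bool, gauge_level_m: float,
                            open_above_m: float = 315.0) -> str:
    """Cross-check the float switch against an independent level gauge;
    on disagreement, hold the gates and alert a human instead of acting."""
    gauge_high = gauge_level_m >= open_above_m
    if float_switch_high != gauge_high:
        return "HOLD_AND_ALERT"  # sensors disagree - don't act on one faulty bit
    return "OPEN" if float_switch_high else "CLOSE"
```

The oversight doesn’t make the switch any smarter; it just bounds how much damage one faulty bit can do.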
On May 6, 2010, a large collection of automated trading algorithms, acting with agency too fast for anyone to manage, caused a dramatic flash crash of the stock market.
Lately, we’ve got ELIZA (https://en.wikipedia.org/wiki/ELIZA) gone wild in advanced chat-bots. People who allow themselves to be sucked into the fantasy that the chatbot “is real”, like a person they can trust, are giving those chat-bots agency in their lives - and with a baseline of 132 suicides per DAY in the US alone, of course there will be some people whose decision to take their own life was influenced, both for and against, by their interactions with chat-bots.
I give LLMs (limited) agency in the creation of software. I like to think I employ a risk-based approach: more agency and less oversight for simple applications with limited-to-near-zero risk, and stricter oversight and review for LLM-generated code that has more important functions or a greater risk of harm should it malfunction… Of course, these are judgement calls, and with millions of people using LLMs to generate code, even if they all follow a similar risk-based approach to how much unrestricted agency the LLM is given, there will be those who make bad judgement calls…
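In code form, that risk-based approach amounts to little more than a policy table. This is a hypothetical sketch of the idea, not a tool I actually run:

```python
from enum import Enum

class Risk(Enum):
    NEAR_ZERO = "throwaway script, prototype, personal tool"
    MODERATE = "internal tool that other people depend on"
    HIGH = "handles money, safety, or personal data"

# How much unrestricted agency the LLM gets at each risk level.
AGENCY_POLICY = {
    Risk.NEAR_ZERO: {"auto_apply_edits": True,  "human_review": False, "tests_required": False},
    Risk.MODERATE:  {"auto_apply_edits": False, "human_review": True,  "tests_required": True},
    Risk.HIGH:      {"auto_apply_edits": False, "human_review": True,  "tests_required": True,
                     "second_reviewer": True},
}

def oversight_for(risk: Risk) -> dict:
    """Where exactly to draw these lines is the judgement call."""
    return AGENCY_POLICY[risk]
```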
Then there’s the YOLOs - pushing the boundaries as hard and fast as they can in some sort of quest to be the first to achieve something great. As Ollivander said to Harry Potter: “He-Who-Must-Not-Be-Named did great things - terrible, yes, but great.”

(I did not downvote you btw)