It’s not AI winter just yet, though there is a distinct chill in the air. Meta is shaking up and downsizing its artificial intelligence division. A new report out of MIT finds that 95 percent of companies’ generative AI programs have failed to earn any profit whatsoever. Tech stocks tanked Tuesday amid broader fears that […]
This is only one study, but I saw an article a few months ago talking about a study by a major phone company that found that the vast majority of people (80% or more IIRC) either didn’t care about AI features on their phones or actively disliked them.
I think most people don’t really care one way or another but hate that it’s being shoved into everything, and those who know the stats on how often it’s wrong are a lot more likely to actively dislike it and be vocal about their dislike.
That sounds quite plausible. AI features on phones/OSs go mostly unused (according to my own study, which has a sample size of who the hell knows and a methodology of "I feel like it").
But LLMs themselves, I think, despite burning money, are fairly well accepted by the people who actually use them, who either don’t understand what’s really going on or don’t care how often the thing is wrong.
I sometimes use LLMs, but only to burn through monkey work that I can quickly and easily review, and redo myself if the result is too shitty. That’s the full extent of my AI use.