I would love to read an actually serious treatment of this issue and not 4 paragraphs that just say the headline but with more words.
I write about technology at theluddite.org
I have been predicting for well over a year now that they will both die before the election, but after the primaries, such that we can’t change the ballots, and when Americans go to vote, we will be choosing between two dead guys. Everyone always asks “I wonder what happens then,” and while I’m sure that there’s a technical legal answer to that question, the real answer is that no one knows.
I know that this kind of actually critical perspective isn’t the point of this article, but software always reflects the ideology of the power structure in which it was built. I covered something very similar in my most recent post, where I applied Philip Agre’s analysis of the so-called Internet Revolution to the AI hype, but you can find many similar analyses all over the STS literature, or throughout Agre’s work, which really ought to be required reading for anyone in software.
edit to add some recommendations: If you think of yourself as a tech person, and don’t necessarily get or enjoy the humanities (for lack of a better word), I recommend starting here, where Agre discusses his own “critical awakening.”
As an AI practitioner already well immersed in the literature, I had incorporated the field’s taste for technical formalization so thoroughly into my own cognitive style that I literally could not read the literatures of nontechnical fields at anything beyond a popular level. The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial – except that it reproduced the same technical schemata as the AI literature. I believe that this problem was not simply my own – that it is characteristic of AI in general (and, no doubt, other technical fields as well).
I’ve now read several of these from wheresyoured.at, and I find them well-researched, well-written, and very dramatic (if a little ranty), but they ultimately stop short of any structural or theoretical insight. It’s right and good to document the shady people inside these shady companies ruining things, but they are symptoms. They are people exploiting structural problems, not the root cause of our problems. The site’s perspective feels like that of someone who had a good career in tech that started before, say, 2014, and is angry at the people who are taking it too far, killing the party for everyone. I’m not saying that there’s anything inherently wrong with that perspective, but it’s certainly a very specific one, and one that I don’t particularly care for.
Even “the rot economy,” which seems to be their big theoretical underpinning, has this problem. It puts at its center the agency of bad actors in venture capital who became overly obsessed with growth. I agree with the discussion about the fallout from that, but it lacks a theory beyond “there are some shitty people being shitty.”
I wish we had less selection, in general. My family lives in Spain, and I’ve also lived in France. This is just my observation, but American grocery stores clearly emphasize always having a consistent variety, whereas my Spanish family expects to eat higher quality produce seasonally. I suspect that this is a symptom of a wider problem, not the cause, but American groceries are just fucking awful by comparison, and so much more expensive too.
Excellent thank you very much for this.
So true! I hereby retract that antizombo slander
I like single purpose concept websites that don’t do anything. They’re the opposite of the modern internet that values engagement above all. They communicate exactly one thing once and though you never have to go back, you’re always glad that they’re there.
I’ve already posted this here, but it’s just perennially relevant: The Anti-Labor Propaganda Masquerading as Science.
It’s not a research paper; it’s a report. They’re not researchers; they’re analysts at a bank. This may seem like a nit-pick, but journalists need to (re-)learn to carefully distinguish between the thing that scientists do and corporate R&D, even though we sometimes use the word “research” for both. The AI hype in particular has been absolutely terrible for this. Companies have learned that putting out AI “research” that’s just them poking at their own product, dressed up in a science-lookin’ paper, leads to an avalanche of free press from lazy, credulous morons gorging themselves on the hype. I’ve written about this problem a lot. For example, in this post, which is about how Google wrote a so-called paper about how their LLM compares to doctors, only for the press to uncritically repeat (and embellish) the results all over the internet. Had anyone in the press actually fucking bothered to read the paper critically, they would’ve noticed that it’s junk science.