It’s known that AI companies will harvest content without care for its veracity and train LLMs on it. These LLMs will then regurgitate that content as fact.
This isn’t a particularly novel finding, but the experiment illustrates it rather well.
The researchers you consider to have acted so immorally did add useless information to the knowledge pool, but it was unadvertised yet immediately recognizable useless information that any sane reviewer would’ve flagged. They included clues like thanking someone at Starfleet Academy for letting them use a lab aboard the USS Enterprise, and they claimed to have gotten funding from the Sideshow Bob Foundation. Subtle.
By providing this easily traceable nonsense, they turned the general but informal understanding that LLMs will repeat bullshit into a hard scientific data point that others can build on. Nothing world-changing, but still valuable. They basically did for LLMs what Alan Sokal did for postmodern cultural studies journals.
Instead of worrying about this experiment, you should worry about all the misinformation in LLMs that wasn’t provided (and diligently documented) by well-meaning researchers.

That has happened to me… twice. Once spammers sent mail to abuse@<domain> and once to postmaster@<domain>. Both of those are “well-known” addresses, and each received exactly one spam mail.
Having your own domain with a catch-all address is rare enough that spammers don’t seem to target it.
Meanwhile, I’ve set up straight-to-spam rules for the handful of companies that leaked my email address. Very useful.
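In case anyone wants to replicate this: here’s a minimal Python sketch of such a rule, run against an IMAP mailbox. The server, credentials, folder name, and leaked aliases are all placeholders, and a real setup would more likely use the mail server’s own filter language (Sieve or similar), but the logic is the same.

```python
# Rough sketch of the straight-to-spam idea over IMAP. Everything here
# (server, login, folder name, leaked aliases) is a placeholder.
import imaplib

LEAKED_ALIASES = ["shop-a@example.com", "newsletter-b@example.com"]

with imaplib.IMAP4_SSL("imap.example.com") as imap:
    imap.login("me@example.com", "app-password")
    imap.select("INBOX")
    for alias in LEAKED_ALIASES:
        # Anything addressed to a leaked alias gets copied to Spam,
        # and the original is flagged for deletion.
        _, data = imap.search(None, "TO", f'"{alias}"')
        for num in data[0].split():
            imap.copy(num, "Spam")
            imap.store(num, "+FLAGS", "\\Deleted")
    imap.expunge()
```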