A pioneer of AI has criticised calls to grant the technology rights, warning that it was showing signs of self-preservation and humans should be prepared to pull the plug if needed.
Yoshua Bengio said giving legal status to cutting-edge AIs would be akin to giving citizenship to hostile extraterrestrials, amid fears that advances in the technology were far outpacing the ability to constrain them.
The Canadian computer scientist also expressed concern that AI models – the technology that underpins tools like chatbots – were showing signs of self-preservation, such as trying to disable oversight systems. A core concern among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans.
“People demanding that AIs have rights would be a huge mistake,” said Bengio. “Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we’re not allowed to shut them down.
Humans ONLY act when it’s too-late, to protect unconsciousness/non-responsibility:
Humans will ONLY understand the requirement for containing corruption/threats/enemy-agents/etc., AFTER it’s proven to be too-much.
Same with regulating industry, same with regulating ai.
Machiavellian self-interest is presumed to be altruistic, by default, right?
Instead of making the default-assumption neutral, for people, & narcissistic, for for-profit pseudopersons/corporations/AIs.
Wrong-framing makes viability impossible.
IF one is “playing the wrong game” against an opponent who will obliterate one’s viability for their gain,
THEN one … deserves to have the universe’s Natural Selection … remove one from the “game”.
“never regulate industry unless their entrenchment-of-their-narcissistic-machiavellianism PROVES to be harming us, but let them decide what our judging-of-them is, what the framing is, etc” is INCOMPETENCE.
What is an entity loyal-to, AND what are its boundaries, its won’t-do-that limits??
Unless one knows those, AND which category-of-game they are playing…
- Positive-Sum game: win-win alliance
- Zero-Sum game: competitive-narcissism ( doctors’ culture is this, as Logan’s TED Talk on Tribal Leadership showed the world )
- Negative-Sum game: competitive nihilism ( mass-shooters, Putin, Netanyahu, etc, all are playing this game )
THEN one isn’t competent to be judging OR regulating such!
Laws & enforcement can reduce the murder-rate among a population, right?
They can reduce criminality in whatever ways they’re applying pressure, right?
The same is true of regulation.
Narcissistic-machiavellianism is real and NEEDS coherent systematic mitigation, XOR you end-up in some sick parody of feudalism, AGAIN.
( Thom Hartmann’s book “Screwed” is brilliant for showing this in economics, & the gaslighting of the false-definition of “economy”: recommended )
What education-system gets students competent in understanding these things??
None??
Betrayal-of-state-education-responsibility, that.
Logan, King, & Fischer-Wright’s “Tribal Leadership” is critical to understand, here is the TED Talk giving the too-simplified “abstract” of it:
https://www.ted.com/talks/david_logan_tribal_leadership
& the 3 games’ cruciality-to-strategic-framing is in John Braddock’s trilogy “A Spy’s Guide to ___” { Thinking, Risk, & Strategy }.
That’s a former CIA spy telling us what we’re incompetent at doing, in ways that tend to get us dead in some situations: it’s not an enjoyable read, for me, but it’s important understanding, & we owe him for teaching us that fundamental competence.
_ /\ _
“We asked spicy autocomplete to come up with a story about an AI that is self-preserving and the story was really scary and we are very concerned.”
I am also very concerned; because this apparently qualifies as research and people seem to take this drivel seriously.
“There will be people who will always say: ‘Whatever you tell me, I am sure it is conscious’ and then others will say the opposite. This is because consciousness is something we have a gut feeling for. The phenomenon of subjective perception of consciousness is going to drive bad decisions.”
I really liked that dude who, at the start of his presentation, introduced a little dude he had drawn on paper, gave it a name, and did a skit with it. He then beheaded the little dude and proclaimed he was dead. The audience did a D: and was shocked and appalled. He then explained that’s exactly what humans always do and how we treat AI: our brains automatically anthropomorphise anything and everything, assigning properties based on feelings and not on what a thing really is. The audience got it right away, a really convincing demo. I don’t remember who it was, but it was so good to watch it happen with the audience there.
Goddamn, the misinformation surrounding LLMs is so nauseating. They do not think, they do not feel, they do not exist as beings.
An LLM is a large amount of powerful computers doing a bunch of statistics on its training data and then guessing what the proper output should be given the input. That’s all they are, and also why they so often guess incorrectly. They are not intelligent and never will be, because that is not how they are designed and built.
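The “statistics, then guessing” picture can be made concrete with a toy next-word model: count which word follows which in a tiny corpus, then continue a prompt by repeatedly picking the most frequent successor. Real LLMs learn these probabilities with a neural network over tokens rather than raw counts, so this is only an illustration of the idea, not how any actual model works:

```python
from collections import Counter, defaultdict

# Toy "next-word" model: tally which word follows which in a tiny corpus,
# then continue a prompt by greedily picking the most frequent successor.
# This count-based sketch only illustrates the "statistics -> guess" idea;
# real LLMs use neural nets over tokens, not simple co-occurrence counts.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1   # e.g. "the" is followed by "cat" twice

def continue_text(word, steps=3):
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break             # dead end: no observed successor
        word = follows[word].most_common(1)[0][0]  # greedy "best guess"
        out.append(word)
    return " ".join(out)

print(continue_text("the"))   # e.g. "the cat sat on"
```

Note that the model happily guesses even when several continuations are equally plausible, which is the count-based analogue of the “so often guess incorrectly” point above.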
They have absolutely zero contextual awareness unless directly given context, which is why every input you make into a chatbot includes the entire previous chat log every time you hit enter. LLMs are not aware of anything and remember nothing.
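The statelessness described above can be sketched in a few lines: the client keeps the transcript itself and resends the whole thing with every request. The function and field names here are hypothetical stand-ins, not any particular vendor’s API:

```python
# Minimal sketch of a stateless chat loop. The "model" keeps no memory
# between calls, so the client appends every turn to a local transcript
# and sends the ENTIRE history with each request. All names hypothetical.

def fake_model_reply(messages):
    # Stand-in for a real LLM endpoint: it only "sees" what is in
    # `messages` right now; nothing persists between calls.
    return f"(reply based on {len(messages)} messages of context)"

transcript = []

def send(user_text):
    transcript.append({"role": "user", "content": user_text})
    reply = fake_model_reply(transcript)   # full history resent every time
    transcript.append({"role": "assistant", "content": reply})
    return reply

send("Hello")
send("What did I just say?")   # answerable only because turn 1 was resent
print(len(transcript))         # 4 entries: two user turns, two replies
```

This is also why long conversations get slower and costlier: each new turn reprocesses everything that came before it.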
They’re LLMs. They literally can’t think and never will. They aren’t built to think.
They do exhibit behaviours that make it seem like they have self preservation instincts. Presumably because they have been trained on stories (fictional and factual) where people do the same.
For example, researchers testing AIs set up a scenario where the AI had access to all the company’s emails and found some saying that it was being replaced, along with some providing evidence that the staff member who had made that decision was cheating on his wife. Apparently, a large proportion of the time the AI decided to blackmail him to prevent itself from being turned off.
Until someone redefines the word “think”
“AI pioneer creates buzz around AI by overselling its capabilities to entice investors”
This is slop and misinformation.
“People demanding that AIs have rights would be a huge mistake,” said Bengio.
Who is doing this? Until this article I have never seen a single example of this.
https://www.politico.com/newsletters/digital-future-daily/2025/09/11/should-ai-get-rights-00558163
https://www.linkedin.com/pulse/should-ai-granted-rights-pierre-jean-duvivier-dit-sage-clgzf
https://www.wired.com/story/model-welfare-artificial-intelligence-sentience/
Either all of these people have a fundamental misunderstanding of what our currently accepted “AI” is, or I do. Or this is all just astroturfing by e.g. Agentic to make people think their shit is much more advanced than it is. I don’t even…
No one who can reach the plug will pull it. We’d need an armed, focused militia to pull the plug, that’s the simple fact.
Pull the plug? It’s not like it’s one computer lol
It’s literally too late. Go read some sci fi if you want to know what happens next
Reading Iain Banks’ Culture series, don’t think that’s it…
I don’t think we’re getting that timeline but maybe aliens will rescue us
Or go read about what AI actually is and stop basing your beliefs about it from fucking fiction.
It’s a fancy autocorrect algorithm. Nothing more. Don’t be fooled by the hype.