Would it? I run a science fiction book club, and there are a lot of arguments that if something achieved human-level intelligence, it would immediately try to kill us, not become our perfect servant.
“It was a morality core they installed after I flooded the Enrichment Center with a deadly neurotoxin to make me stop flooding the Enrichment Center with a deadly neurotoxin.”
Actual AGI would be trustworthy. The current “AI” is just a word salad blender program.
I believe in the Grand Plan, and I have faith in The Director. Begone, faction scum.
That was a good show.
It could be argued that people are AGI. Are they always trustworthy?