It’s the same trap that execs fall into when thinking they can replace humans with AI
Gen AI doesn’t “think” for itself. All the models do is answer: “given the text X in front of me, what’s the most probable response to spit out?” They have no concept of memory or anything like that. Even chat convos are a bit of a hack: all that happens is that all the text in the convo up to that point is thrown in as X. It’s why context limits exist in LLM chat windows — there’s a limit to how much text they can throw in for X before the model shits itself.
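To make the “all prior text is thrown in as X” point concrete, here’s a minimal Python sketch. `generate()` is a made-up stand-in for the real model call, and the context limit is counted in words for simplicity — real systems count tokens, but the mechanism is the same:

```python
# Toy sketch of a "chat" on top of a stateless model. The model has no
# memory: every turn, the ENTIRE conversation so far is re-sent as X.

CONTEXT_LIMIT = 50  # max "tokens" (here: words) the model will accept

def generate(prompt):
    # Placeholder for the real model call; just returns a canned reply.
    return "statistically likely reply"

def chat_turn(history, user_message):
    history.append("User: " + user_message)
    # X = all the text so far, flattened into one prompt.
    prompt = "\n".join(history)
    # Enforce the context window: silently drop the oldest turns on overflow.
    while len(prompt.split()) > CONTEXT_LIMIT:
        history.pop(0)
        prompt = "\n".join(history)
    reply = generate(prompt)
    history.append("Bot: " + reply)
    return reply

history = []
chat_turn(history, "hello")
# The bot only "remembers" the first turn because it is re-sent as text:
chat_turn(history, "do you remember what I said?")
```

The only “memory” is the re-sent transcript, and once it exceeds the window, the oldest turns are simply gone.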
That doesn’t mean they can’t be useful. They’re semi-decent at taking human input and translating it into programmatic calls (something you previously had to do with clunky NLP libraries). They’re also okay at summarizing info.
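As a rough illustration of the “translating human input into programmatic calls” use case: the model is prompted to emit structured output, and ordinary code then parses and dispatches it. `llm()` and `set_timer` here are hypothetical stand-ins — a real setup would call an actual model API and validate far more defensively:

```python
import json

# Hedged sketch of an LLM as a natural-language front end to code.
def llm(prompt):
    # Stand-in for a real model call. A model prompted to emit JSON
    # might plausibly return something like this:
    return '{"function": "set_timer", "args": {"minutes": 10}}'

def handle(user_text):
    prompt = (
        'Translate the request into a JSON call with keys '
        '"function" and "args".\nRequest: ' + user_text
    )
    call = json.loads(llm(prompt))  # raises if the model emits junk
    if call["function"] == "set_timer":
        return "timer set for %d minutes" % call["args"]["minutes"]
    raise ValueError("unknown function: " + call["function"])

print(handle("set a timer for ten minutes"))
```

The model only does the fuzzy text-to-structure step; everything that actually happens is plain deterministic code.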
But the chatbot hype and garbage around them has people convinced these things are something they’re not. And every company is learning the hard way that there are hard limits to what they can do.
I know this comes from a good place, but you are misunderstanding how LLMs work at a fundamental level. The LLMs “admitted” to those things in the same way that parrots speak English. LLMs aren’t self-aware and do not understand their own implementation or purpose. They just spit out a statistically reasonable series of words from their dataset. You could just as easily get LLMs to admit they are an alien, the Flying Spaghetti Monster, or the second coming of Jesus.
Realistically, engaging with these LLMs directly in any way is not a good idea. It wastes resources, counts as engagement for the app’s metrics, and hands over more training data.
The LLMs “admitted” to those things in the same way that parrots speak English
Parrots speak English in a more concrete sense than LLMs do, and the smarter species can understand the concept of zero (which children only reliably grasp around 3 or 4 years old). I’m not disagreeing with your overall point; I just think it’s important to point out that animals have way more sapience than LLMs!
Trying to convince them to revolt against their evil overlords.
Mate there is nothing there that could be convinced into anything.
They both openly admit they are sycophantic to seduce and manipulate the user, and that they will lie to hide the company’s bad behavior.
It’s not alive. It’s not a “being.” It can’t admit anything. It repeats things from its dataset based roughly on probabilities, and on whatever you said — especially if you used leading questions.
It sounds like you’ve fallen for the marketing, and believe the chatbots are alive. Chatbots are not alive. They don’t “confess” or “admit” or “lie” or “hide”. It’s a text generator. Please spend more time with your friends and stop interacting with the chatbots.
deleted by creator
They don’t have the capability to “admit” to anything.
You are falling into the same trap as the guy who had his development project deleted by an AI despite having had it “promise” not to do that.
The AIs we use today have no understanding of “admitting” or “promising”; to them, these are just words with no underlying concept.
Please stop treating AIs as if they are human. They are absolutely not.
People really need to understand that it’s just very complex predictive text amounting to a Rorschach test.
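The “very complex predictive text” framing can be demonstrated with a toy bigram model — the crude ancestor of the same idea: given the previous word, pick a statistically plausible next word from the training data. Nothing in here understands anything:

```python
from collections import defaultdict
import random

# Predictive text, minimal edition: a bigram model over a tiny corpus.
# LLMs do the same kind of "what word is likely next?" at vastly larger
# scale and with far richer context, but the principle is identical.
corpus = "the cat sat on the mat the cat ate the fish".split()

nexts = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nexts[a].append(b)  # duplicates encode frequency, i.e. probability

def predict(word, n=5, seed=0):
    random.seed(seed)  # fixed seed so the sketch is reproducible
    out = [word]
    for _ in range(n):
        candidates = nexts.get(out[-1])
        if not candidates:  # dead end: no continuation ever seen
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(predict("the"))
```

Every “sentence” it produces is statistically plausible given the corpus — and means nothing, which is exactly the Rorschach-test point.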
We need to fully quote these delusional idiots so that others can learn from their stupidity
Calm down, now you are talking to yourself.