Ok, technically you are correct. Still, they are lies, or let's call it disinformation or propaganda. Whether the output is controlled by the machine itself having a mind (which of course is sci-fi) or by those who control the machine.
What you're calling lies are false positives. To lie you have to know the truth. AIs are ignorant. They don't know what anything is; all they "know" is mathematical patterns in 1s and 0s.
They would only be lies if Google engineers explicitly overrode the model to output the false information. What most implementations of LLMs are is weaponized incompetence, for profit. Capitalists know they output false information, and they don't care, because their only goal is profit and power.
If Google knows it outputs falsehoods and lets it continue, it becomes purposeful. That makes them lies in my book.
If a newspaper prints lies, you don't say the physical piece of pulped-up tree you are holding is lying to you; you say the author is.
If it's shown to the newspaper that they are lies and they keep printing them, then yes, I do call them liars as well. Whatever you want to call it, you must admit they are culpable for spreading disinformation.
No, you are proving my point here. You say 'they', as in the publishers/owners/printers of the newspaper. You don't blame 'it', the literal, physical piece of paper you are holding in your hands.
In the same way that you would not say a clock was lying to you if it displays the wrong time.
OK, so I don't blame the GPUs crunching out the LLM lies, or the HTML on the page; I blame Google, the company that programmed them.
The point is, the LLM is not ‘lying’ to you. It’s showing you information. It doesn’t ‘know’ whether the information is true or not. It also doesn’t ‘care’. Because it is a statistical model and is incapable of those things. And if you scroll back to my initial point, I said “technically, it’s not lying, because lying requires intent to deceive, and LLMs don’t have intent”
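The "statistical model" point can be made concrete with a toy sketch. This is a minimal bigram counter, vastly simpler than a real LLM, and the corpus, `counts`, and `next_token` are all made up for illustration; but the core idea is the same: output is chosen by frequency, and truth never appears anywhere in the computation.

```python
from collections import defaultdict
import random

# Hypothetical toy corpus; note it contains a false statement.
corpus = "the sky is blue . the sky is blue . the moon is cheese ."
tokens = corpus.split()

# Count bigram frequencies: how often each token follows another.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    # Sample a continuation in proportion to its observed frequency.
    options = counts[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

# Nothing in this model represents truth or intent: "the moon is
# cheese" is a perfectly probable continuation, because frequency
# is the only thing the model tracks.
```

Starting from "the", this can happily generate "the moon is cheese" — not a lie by the model, just the most that statistics over text can give you.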
What’s the point of making this semantic difference though?