Wait, you’re surprised it did what you asked of it?
There’s a massive difference between asking if something is fake, and telling it it is and asking why.
A person would make the same type of guesses and explanations if given the same task.
All this is showing is that you and a LOT of other people just don’t know enough about AI to be able to have a conversation about it.
It even says “suggests” in it; it’s making no claim that it’s real or fake. The lack of basic comprehension is the issue here.
I think if a person were asked to do the same, they would actually look at the image and make genuine remarks. Look at the points it has highlighted: the boxes are placed around random spots, and the references to those boxes are unrelated (i.e. yellow talks about branches when there are no branches near the yellow box, red talks about a bent guardrail when the red box on the guardrail is over an undamaged section).
It has just made up points that “sound correct”; anyone actually looking at this can tell there is no intelligence behind it.
Yet that wasn’t the point they even made! Lmfao nice reaching there.
Those would be the same type of points a human would make to accomplish the task.
You seem to be ignoring the facts. It was told the image was fake, and told to explain why. Even a human who knows it’s real would still do what was presented to them.
The person told the AI a very specific thing to do, with no room for variance. It wasn’t even stated as a question; they made a demand, and any human in the same position would act the same way. Do you really think having to tell a human 100 times, “yes, the image is real, can you do the task presented,” is more efficient and better than it just being done?
Now, you could also present the task with both being allowed to question it, and the AI would follow the instructions better.
Back to situation one: with the human you would be constantly interrupted. Is that a good employee or subject? Or one you would immediately replace because it can’t even follow basic instructions? AI or human, you would point it at the task at hand. Yes, critical thinking is important, but not for this stupid task. Stop applying instructions and context that never existed in the first place. In a one-for-one example, the AI would question it too; if you can’t understand this, you shouldn’t be commenting on AI.
AI sucks, but don’t ignore reality to make your asinine point.
A person would have the agency to ask, “Why do you think it’s fake?”
Why would it have to? It and the person doing the task already know to do any task put in front of them. It’s one of a hundred photos for all either of them knows.
You are extending context and instructions that don’t exist. The situation would be: both are doing whatever task is presented to them. A human asking would fail and be removed. They failed order number one.
You could also set up a situation where the AI and the human were both capable of asking. The AI won’t do what it’s not asked to; that’s the comprehension that’s lacking.
When people use a conversational tool, they expect it to act human, which it INTENTIONALLY DOES but without the sanity of a real human.
It’s not a conversational tool when you present it with a specific task…
Do you not understand even the basic premise of how AI works?
When we are talking about LLM chatbots, they have a conversational interface. I am not talking about other types of machine learning. I don’t have time to keep responding.
Then you are making up your own conversation instead of following the thread?
The person presented a specific task to an AI; where does a chatbot come in? You seem to be confused about what AI is, and that’s what I pointed out. Thanks for making it clear.
No. Stop making things up to complain about. Or at least leave me out of it.
Then what are you doing? Complaining that it did exactly what you instructed it to do?
What else did you expect?
I get that circlejerking against AI is hip and fun, but this isn’t even one of the valid errors it makes. This is just pure human error lmfao.
Clearly, they asked it the kind of question the average Joe would ask, and it has shown that, again, it’s full of overly confident lies. It did not just reinforce the user’s original belief that the image is fake; it also hallucinated a bunch of professional-sounding statements that are false if you take the time to check them. Most people won’t check them though, and will straight up believe what it just spat out and think, “oh this is so smart! Outrageous that people call me dumb for asking it for life advice!”
But they didn’t ask it a question… They specifically told it the image was fake and to explain why. That’s not a question; that’s a task.
Clearly (as you so incorrectly called it a question…) the lack of basic reading comprehension being shown here explains the issue perfectly.
It’s not people relying on it, it’s people using it for stuff it’s not meant for!