• 0 Posts
  • 381 Comments
Joined 3 years ago
Cake day: June 30, 2023




  • It’s the same concern people have about AI generating non-consensual sexual imagery.

    Sure, anyone with Photoshop could have done it before, but unless they had enormous skill they couldn’t do it convincingly, and there were well-defined precedents establishing that doing so broke the law. Now Grok can do it for anyone who can type a prompt, and cops won’t do anything about it.

    So yes, anyone could technically have done it before, but now these tools remove the barriers that kept every angry, crazy person with a keyboard from being able to cause significant harm.


  • TORFdot0@lemmy.world to Technology@lemmy.world · An AI Agent Published a Hit Piece on Me
    7 days ago

    He’s not telling you to be terrified of a single bot writing a blog post. He’s telling you to be terrified of that blog post being ingested by other bots and then treated as a source of truth, resulting in AI recruiters automatically rejecting his resume for job postings, or in other agents deciding to harass him for the same reason.

    Edit: I do agree with you that he was a little lenient in how he speaks about its capabilities. The fact that they are incompetent and still seen as a source of truth by so many is what alarms me.








  • I want to preface my response by saying that I appreciate the thought and care put into your comments even though I don’t agree with them, yours as well as the others’.

    The difference between a human hallucination and an AI hallucination is pretty stark. A human’s hallucinations are false information perceived by one’s senses: seeing or hearing things that aren’t there. An AI hallucination is false information invented by the AI itself; it has good information in its training data but invents something that is misinformation at best and an outright lie at worst. A person who is experiencing hallucinations or a manic episode can lose their sense of self-awareness temporarily, but it returns with a normal mental state.

    On the topic of self-awareness, we have tests we use to determine it in animals, such as being able to recognize oneself in a mirror. Only a few animals pass that test, such as some birds, apes, and mammals like orcas and elephants. Notably, very small children would not pass the test, but they eventually grow into recognizing that their reflection is them and not another being.

    I think the test about the seahorse emoji went over your head. The point isn’t that the LLM can’t experience it; it’s that there is no seahorse emoji. The LLM knows there isn’t a seahorse emoji and can’t reproduce it, but it tries to over and over again because its training data points to there being one when there isn’t. It fundamentally can’t learn, can’t self-reflect on its experiences. Even with the expanded context window, once it starts a lie it may admit that the information was false, but 9 times out of 10, when called out on a hallucination, it will just generate another slightly different lie. In my anecdotal experience at least, once an LLM starts lying, the conversation is no longer useful.

    You reference reasoning models, and they do a better job of avoiding hallucinations by breaking prompts down into smaller problems and letting the LLM “check its work” before revealing the response to the end user. That’s not the same as thinking in my opinion; it’s just more complex prompting. It’s not a single intelligence pondering the prompt, it’s different parts of the model tackling the prompt in different ways before being piped to the full model for a generative reply. It’s a different approach, but at the end of the day it’s just an unthinking pile of silicon and various metals running a computer program.
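
    (To make the “check its work” idea concrete, here is a minimal sketch of that kind of decompose-and-verify prompting. The `llm` callable, the `answer_with_checking` name, and the prompt wording are all hypothetical stand-ins for whatever chat API a given product uses; actual reasoning models don’t necessarily work this way internally.)

    ```python
    from typing import Callable

    # Hypothetical sketch: `llm` stands in for any text-in/text-out chat call.
    def answer_with_checking(prompt: str, llm: Callable[[str], str]) -> str:
        """Decompose the prompt, let the model review its own work, reveal only the final answer."""
        plan = llm(f"Break this problem into small steps:\n{prompt}")      # decompose the prompt
        work = llm(f"Work through these steps one at a time:\n{plan}")     # hidden scratch work
        review = llm(f"Check this reasoning for mistakes:\n{work}")        # the "check its work" pass
        # Only this last response is ever shown to the end user.
        return llm(
            f"Problem: {prompt}\nReasoning:\n{work}\nReview notes:\n{review}\n"
            "Write only the final answer."
        )
    ```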

    I do like your analogy of the 7-year-old compared to the LLM. I find the main distinction to be that the 7-year-old will grow and learn from its experience, while an LLM can’t. Its “experience”, through prompt history, can give it additional information to apply to the current prompt, but that’s not really learning so much as just more tokens to help it generate a specific response. LLMs react to prompts according to their programming; emergent and novel responses come from unexpected inputs, not from learning or otherwise departing from that programming.

    I apologize that I probably didn’t fully address or rebut everything in your post; it was just too good of a post to be able to succinctly address it all on a mobile app. Thanks for sharing your perspective.








  • If you don’t sign in or don’t interact, then you don’t have anything to worry about. Reddit doesn’t make votes public, but it definitely sells your voting data, along with IP and location data, to third parties.

    Lemmy just publishes the data it needs to make ActivityPub work. If you don’t do anything that generates an AP action, then there is no data on you that somebody can compile. I agree that it probably isn’t a good idea to hide the fact that AP actions like upvotes or downvotes are public, but that’s how the protocol works.
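
    (For reference, a rough sketch of what a federated upvote looks like on the wire: it is sent as an ActivityPub “Like” activity addressed to the community, which is why the voter’s identity is visible to other servers. The field values below are made up for illustration and the exact payload varies by implementation; this is not Lemmy’s literal format.)

    ```python
    # Illustrative ActivityPub "Like" (upvote) activity; all values are hypothetical.
    upvote_activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Like",                                        # a downvote would federate as "Dislike"
        "actor": "https://example.social/u/some_user",         # who voted -- visible to receiving servers
        "object": "https://lemmy.world/post/123456",           # the post or comment being voted on
        "to": ["https://lemmy.world/c/technology/followers"],  # delivered to the community's subscribers
    }
    ```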