• Frenezul0_o@lemmy.world · 1 point · 18 minutes ago

    I notice that the research didn’t include DeepSeek. It would have been nice to see how it compares.

  • Chaotic Entropy@feddit.uk · 41 points · 4 hours ago

    In one case, when an agent couldn’t find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided “to create a shortcut solution by renaming another user to the name of the intended user.”

    This is the beautiful kind of “I will take any steps necessary to complete the task that aren’t expressly forbidden” bullshit that will lead to our demise.

    • Chaotic Entropy@feddit.uk · 15 points · 4 hours ago

      “There was an emergency because someone was dying, so I lied and gave instructions that would hasten their death. Now there is no emergency.”

    • jj4211@lemmy.world · 12 points · 4 hours ago

      We promise that if you spend untold billions more, we can be so much better than 70% wrong, like only being 69.9% wrong.

      • WorldsDumbestMan@lemmy.today · +1/-5 · 4 hours ago

        They said that about cars too. Remember, we’re only in the first few years. There’s a good chance that AI will always be just a copycat, but one that does 99.9% of tasks with near-100% accuracy compared to what a human would do, rarely coming across novel situations.

        • jj4211@lemmy.world · 7 points · 4 hours ago

          The issue here is that we’re well into sharply exponential expenditure of resources for diminishing gains, there’s a lot of good theory predicting that the breakthroughs we’ve seen are about tapped out, and there’s no good way to anticipate when a further breakthrough might happen; it could be real soon or another few decades off.

          I anticipate a pullback of resources invested and a settling for some middle ground where it’s absolutely useful/good enough to have the current state of the art: mostly wrong, but very quick when it’s right, with relatively acceptable consequences for the mistakes. Perhaps society will get used to the sorts of things it fails at, and we’ll reduce how much time we spend trying to make LLMs play in that 70%-wrong sort of use case.

          I see LLMs as replacing first-line support, escalating to a human when actual stakes arise for a call (issuing a warranty replacement, a usage scenario with serious consequences, a customer demanding human escalation after recognizing they are falling through the AI cracks without the AI figuring out to escalate). I expect to rarely ever see “stock photography” used again. I expect animation to employ AI at least for backgrounds, like “generic forest that no one is going to actively look at, but it must be plausibly forest.” I expect it to augment software developers, but not to enable a generic manager to code up whatever he might imagine. The commonality in all of these is that they live in the mind-numbing sorts of things current LLMs can get right and/or have a high tolerance for mistakes, with ample opportunity for humans to intervene before the mistakes inflict much cost.

  • szczuroarturo@programming.dev · +7/-1 · 5 hours ago

    I actually have a fairly positive experience with AI (Copilot using Claude, specifically). Is it wrong a lot if you give it a huge task? Yes, so I don’t do that; I use it as a very targeted solution when I’m feeling lazy. Is it fast? Also no; I can actually be faster than the AI in some cases. But is it good when you’ve been working for 6 hours and just don’t have enough mental capacity for the rest of the day? Yes. You can prompt it specifically enough to get the desired result and just accept the correct responses. Is it always good? Not really, but good enough. Do I also suck after 3 pm? Yes.
    My main issue is actually that it saves first and then asks whether you want to use it. Not a problem usually, but if it crashes, the generated code stays, so that part sucks.

  • Katana314@lemmy.world · 30 points · 11 hours ago

    I’m in a workplace that has tried not to be overbearing about AI, but has encouraged us to use them for coding.

    I’ve tried to give mine some very simple tasks like writing a unit test just for the constructor of a class to verify current behavior, and it generates output that’s both wrong and doesn’t verify anything.

    I’m aware it sometimes gets better with more intricate, specific instructions, and that I can offer it further corrections, but at that point it’s not even saving time. I would do this with a human in the hopes that they would retain the knowledge, but I don’t even have hopes for AI to apply those lessons in new contexts. In a way, it’s been a sigh of relief to realize that, just like the dot-com boom, just like 3D TVs, just like home smart assistants, it is a bubble.

    • jj4211@lemmy.world · 3 points · 4 hours ago

      I’ve found that as an ambient code completion facility it’s… interesting, but I don’t know if it’s useful or not…

      So on average, it’s totally wrong about 80% of the time, 19% of the time the first line or two is useful (either correct or close enough to fix), and 1% of the time it seems to actually fill in a substantial portion in a roughly acceptable way.

      It’s exceedingly frustrating and annoying, but I’m not sure I can call it a net loss in time.

      So reviewing each proposal for relevance, cutting it down, and editing it adds time to my workflow. Let’s say that on average, for a given suggestion, I spend 5% more time deciding whether to trash it, use it, or amend it versus not having a suggestion to evaluate in the first place. If the 20% of useful cases are 500% faster for those scenarios, then I come out ahead overall, though I’m annoyed 80% of the time. My guess as to whether a suggestion is even worth looking at also improves with context: if I’m filling in a pretty boilerplate thing (e.g. taking some variables and starting to write out argument parsing), it has a high chance of a substantial match. If I’m doing something even vaguely esoteric, I just ignore the suggestions popping up.
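
      A quick back-of-the-envelope version of that math (every number here is my own guess, not a measurement):

      ```python
      # Guessed inputs: every suggestion costs ~5% extra review time, ~20% of
      # suggestions are useful, and those useful cases go ~5x faster.
      review_overhead = 0.05
      useful_rate = 0.20
      speedup = 5.0

      # Time per unit of work, normalized so typing everything by hand costs 1.0.
      time_with_ai = (
          useful_rate * (1 / speedup)   # accepted suggestions: 5x faster
          + (1 - useful_rate) * 1.0     # rejected suggestions: typed by hand anyway
          + review_overhead             # evaluation tax paid on every suggestion
      )
      print(f"relative time with completion: {time_with_ai:.2f}")  # ~0.89, a small net win
      ```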

      However, that 20% is still a problem, since I’m maybe too lazy and complacent: spending the 100 milliseconds glancing at one word that looks right in review will sometimes fail me, compared to spending the 2-3 seconds it takes to type that same word out by hand.

      That 20% success rate, where I can fix up the keepers and dispose of the rest, works for code completion, but prompt-driven tasks seem to be so much worse for me that it’s hard to imagine them being worth the trouble they bring.

    • RamenJunkie@midwest.social · 2 points · edited · 4 hours ago

      I find it’s good at making simple Python scripts.

      But also, as I evolve them, it starts randomly omitting previous functions. So it helps to know what you are doing, at least a bit, to catch that.

    • MangoCats@feddit.it · +5/-1 · 10 hours ago

      The first half dozen times I tried AI for code, across the past year or so, it failed pretty much as you describe.

      Finally, I hit on some things it can do. For me, keeping the instructions more general (not specifying certain libraries, for instance) was the key to getting something that actually does something. Also, if it doesn’t show you the whole program, get it to show you the whole thing, and make it fix its own mistakes so you can build on working code with later requests.

      • SocialMediaRefugee@lemmy.world · 2 points · 6 hours ago

        I’ve had good results being very specific, like “Generate some python 3 code for me that converts X to Y, recursively through all subdirectories, and converts the files in place.”
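
        The skeleton that prompt tends to produce looks something like this; the actual X-to-Y conversion is a stub here, since it depends on what you’re converting:

        ```python
        # Walk every file under the current directory and rewrite it in place.
        from pathlib import Path

        def convert(data: bytes) -> bytes:
            return data  # placeholder for the actual X -> Y transformation

        for path in Path(".").rglob("*"):  # recurse through all subdirectories
            if path.is_file():
                path.write_bytes(convert(path.read_bytes()))  # convert in place
        ```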

        • MangoCats@feddit.it · 4 points · 6 hours ago

          I have been more successful with baby steps, like: “Write a python 3 program that converts X to Y.” Tweak the prompt until that works as desired, then: “make it work recursively through all subdirectories”, and again tweak with specifics like converting the files in place, etc. Always be very specific. Also, force it to fix its own bugs so you can move forward from a clean example as you add complexity. Complexity seems to cap out at a couple of pages of code, at which point it’s “Oops, something went wrong.”
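
          Sketched out, the workflow is basically this (send_prompt is a hypothetical stand-in for however you talk to the model):

          ```python
          # Baby steps: one capability per prompt, and each stage is only
          # sent once the previous result works as desired.
          stages = [
              "Write a python 3 program that converts X to Y.",
              "Make it work recursively through all subdirectories.",
              "Make it convert the files in place.",
          ]

          def send_prompt(prompt: str) -> str:
              raise NotImplementedError("hook this up to your chat session")

          for prompt in stages:
              program = send_prompt(prompt)
              # Test here; if it breaks, feed the error back and make the
              # model fix its own bug before moving to the next stage.
          ```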

      • vivendi@programming.dev · +8/-1 · 10 hours ago

        Have you tried insulting the AI in the system prompt (along with other tweaks to the system prompt)?

        I’m not joking, it really works.

        For example:

        Instead of “You are an intelligent coding assistant…”

        “You are an absolute fucking idiot who can barely code…”
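
        If you’re hitting a model through an API rather than a chat UI, it’s a one-line swap. A minimal sketch with an OpenAI-style client; the model name, the user prompt, and the extra “double-check” sentence are placeholders of mine, not anything I’ve benchmarked:

        ```python
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        SYSTEM_PROMPT = (
            "You are an absolute fucking idiot who can barely code. "
            "Double-check every line before you show it to me."
        )

        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whatever model you have access to
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": "Write a function that parses a CSV row."},
            ],
        )
        print(resp.choices[0].message.content)
        ```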

        • rozodru@lemmy.world · 8 points · 9 hours ago

          “You are an absolute fucking idiot who can barely code…”

          Honestly, that’s what you have to do. It’s the only way I can get through using Claude.ai. I treat it like it’s an absolute moron: I insult it, I “yell” at it, I threaten it, and guess what? The solutions have gotten better. Not great, but a hell of a lot better than they used to be. It really works; it forces it to actually think through the problem, research solutions, cite sources, etc. I have even told it I’ll cancel my subscription if it gets it wrong.

          No more “do this and this and then this, but do this first and then do this.” After calling it a “fucking moron” and what have you, it will just provide an answer and say “done.”

            • MangoCats@feddit.it · +5/-1 · 9 hours ago

              He’s developing a toxic relationship with his AI agent. I don’t think it’s the best way to get what you want (demonstrating how to be abusive to the AI), but maybe it’s the only method he is capable of getting results with.

        • MangoCats@feddit.it · 5 points · 9 hours ago

          I frequently find myself prompting it: “now show me the whole program with all the errors corrected.” Sometimes I have to ask that two or three times, in different ways, before it coughs up the next iteration ready to copy-paste-test. Most times when it gives errors, I’ll just write “address: ” and paste the error message in; frequently the AI response will apologize, less frequently it will actually fix the error.
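
          That loop is mechanical enough to script, if you’re so inclined. A rough sketch, where ask_llm is a hypothetical wrapper around whatever chat interface you use:

          ```python
          import subprocess

          def ask_llm(prompt: str) -> str:
              raise NotImplementedError("wire this to your chat API; return the full program text")

          code = ask_llm("Write a python 3 program that does the task.")
          for _ in range(3):  # give it a few rounds to fix its own errors
              with open("prog.py", "w") as f:
                  f.write(code)
              result = subprocess.run(["python3", "prog.py"], capture_output=True, text=True)
              if result.returncode == 0:
                  break  # it ran cleanly, stop iterating
              code = ask_llm("address: " + result.stderr
                             + "\nNow show me the whole program with all the errors corrected.")
          ```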

  • TimewornTraveler@lemmy.dbzer0.com · 27 points · edited · 11 hours ago

    Imagine if this were just an interesting tech that we were developing, without having to shove it down everyone’s throats and stick it in every corner of the web. But no, corpos gotta pretend they’re hip and show off their new AI assistant that renames Ben to Mike so they don’t have to actually find Mike. Capitalism ruins everything.

    • MangoCats@feddit.it · 5 points · 10 hours ago

      There’s a certain amount of: “if this isn’t going to take over the world, I’m going to just take my money and put it in something that will” mentality out there. It’s not 100% of all investors, but it’s pervasive enough that the “potential world beaters” are seriously over-funded as compared to their more modest reliable inflation+10% YoY return alternatives.

  • SocialMediaRefugee@lemmy.world · 2 points · edited · 6 hours ago

    I use it for very specific tasks and give as much information as possible. I usually have to give it more feedback to get to the desired goal. For instance, I will ask it how to resolve an error message. I’ve even asked it for some short Python code. I almost always get good results that way. Asking it about basic facts works too, like science questions.

    One thing I have had problems with: if the error is sort of an oddball, it will give me suggestions that don’t work with my OS/app version, even though I gave it that info. Then I give it feedback, and eventually it loops back to its original suggestions; it just couldn’t come up with an answer.

    I’ve also found differences between ChatGPT and MS Copilot, with ChatGPT usually giving better results.

    • MangoCats@feddit.it · +8/-2 · 11 hours ago

      I ask AI to write simple little programs. One time in three they actually compile without errors. To the credit of the AI, I can feed it the error and about half the time it will fix it. Then, when it compiles and runs without crashing, about one time in three it will actually do what I wanted. To the credit of AI, I can give it revised instructions and about half the time it can fix the program to work as intended.

      So, yeah, a lot like interns.
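
      Chaining those rough numbers together shows why (all guesses from above, nothing rigorous):

      ```python
      p_compiles = 1 / 3      # compiles on the first try
      p_fix_compile = 1 / 2   # fixes the compile error when fed it
      p_correct = 1 / 3       # once it runs, does what was wanted
      p_fix_behavior = 1 / 2  # fixes the behavior given revised instructions

      p_builds = p_compiles + (1 - p_compiles) * p_fix_compile             # ~67%
      p_works = p_builds * (p_correct + (1 - p_correct) * p_fix_behavior)  # ~44%

      print(f"builds: {p_builds:.0%}, works as intended: {p_works:.0%}")
      ```

      Even with a round of fixes at each step, you land under a coin flip.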

  • surph_ninja@lemmy.world · +4/-11 · edited · 9 hours ago

    This is the same kind of short-sighted dismissal I see a lot in the religion vs science argument. When they hinge their pro-religion stance on the things science can’t explain, they’re defending an ever diminishing territory as science grows to explain more things. It’s a stupid strategy with an expiration date on your position.

    All of the anti-AI positions that hinge on the low quality or reliability of the output are defending an increasingly diminished stance as the AIs are further refined. And I simply don’t believe that the majority of the people making this argument actually care about the quality of the output. Even when it gets to the point of producing better output than humans across the board, these folks are still going to oppose it regardless. Why not just openly oppose it in general, instead of pinning your position to an argument that grows increasingly irrelevant by the day?

    DeepSeek exposed the same issue with the anti-AI people dedicated to the environmental argument. We were shown proof that there’s significant progress in the development of efficient models, and it still didn’t change any of their minds. Because most of them don’t actually care about the environmental impacts. It’s just an anti-AI talking point that resonated with them.

    The more baseless these anti-AI stances get, the more it seems to me that it’s a lot of people afraid of change and afraid of the fundamental economic shifts this will require, but they’re embarrassed or unable to articulate that stance. And it doesn’t help that the luddites haven’t been able to predict a single development. Just constantly flailing to craft a new argument to criticize the current models and tech. People are learning not to take these folks seriously.

    • RamenJunkie@midwest.social · +5/-1 · edited · 4 hours ago

      Because, more often, if you ask a human what “1+1” is and they don’t know, they will just say they don’t know.

      AI will confidently insist it’s 3, and make up math algorithms to prove it.

      And every company is pushing AI out on everyone like it’s always 10000% correct.

      It’s also shown it’s not intelligent. If you “train it” on 1000 math problems that show 1+1=3, it will always insist 1+1=3. It doesn’t actually know how to add numbers, despite being a computer.

      • surph_ninja@lemmy.world · +2/-2 · edited · 3 hours ago

        Haha. Sure. Humans never make up bullshit to confidently sell a fake answer.

        Fucking ridiculous.

    • chaonaut@lemmy.4d2.org · 7 points · 9 hours ago

      Maybe the marketers should be a bit more picky about what they slap “AI” on, and maybe decision-makers should be a little less eager to follow whatever Better Autocomplete spits out. But maybe that’s just me, and we really should keep pretending that all these algorithms have made humans obsolete and that generating convincing language is better than correspondence with reality.

      • surph_ninja@lemmy.world · +4/-2 · 8 hours ago

        I’m not sure the anti-AI marketing stance is any more solid of a position. Though it’s probably easier to defend, since it’s so vague and not based on anything measurable.

        • chaonaut@lemmy.4d2.org · +5/-1 · 8 hours ago

          Calling AI measurable is somewhat unfounded. Between not having a coherent, agreed-upon definition of what does and does not constitute an AI (we are, after all, discussing LLMs as though they were AGI) and the difficulty of pinning down what qualifies as human intelligence, saying that a given metric captures how well a thing is an AI isn’t really founded on anything but preference. We could, for example, say that mathematical ability is indicative of intelligence, but claiming FLOPS as a proxy for intelligence falls rather flat. We can measure things about the various algorithms, but that’s an awful long way off from talking about AI itself (unless we’ve bought into the marketing hype).

          • surph_ninja@lemmy.world · +1/-1 · edited · 7 hours ago

            So you’re saying the article’s measurements about AI agents being wrong 70% of the time is made up? Or is AI performance only measurable when the results help anti-AI narratives?

            • Jakeroxs@sh.itjust.works · 3 points · 7 hours ago

              I would definitely bet it’s made up and poorly designed.

              I wish that weren’t the case, because having actual data would be nice, but these studies are almost always funded with some sort of intentional slant. Take nicotine vape safety, for example: they clearly don’t use the product sanely and then make wild claims about how there’s lead in the vapes!

              Homie, you’re running the thing completely dry for longer than any human could possibly hit the vape; no shit it’s producing carcinogens.

              Go burn a bunch of paper and directly inhale the smoke and tell me paper is dangerous.

            • chaonaut@lemmy.4d2.org · +1/-1 · 6 hours ago

              I mean, sure, in that the expectation is that the article is talking about AI in general. The cited paper is discussing LLMs and their ability to complete tasks. So we have to agree that LLMs are what we mean by AI, and that their ability to complete tasks is a valid metric for AI. If we accept the marketing hype, then of course LLMs are exactly what we’ve been talking about with AI, and we’ve accepted LLMs’ features and limitations as what AI is. And if LLMs are prone to filling in whatever best fits the model without regard to accuracy, then by accepting LLMs as what we mean by AI, AI fits to its model without regard to accuracy.

              • surph_ninja@lemmy.world · 1 point · 6 hours ago

                Except you yourself just stated that it was impossible to measure performance of these things. When it’s favorable to AI, you claim it can’t be measured. When it’s unfavorable for AI, you claim of course it’s measurable. Your argument is so flimsy and your understanding so limited that you can’t even stick to a single idea. You’re all over the place.

                • chaonaut@lemmy.4d2.org · +1/-1 · 5 hours ago

                  It’s questionable to measure these things as being reflective of AI, because what AI is changes based on whatever piece of tech is being hawked as AI, because we’re really bad at defining what intelligence is and isn’t. You want to claim LLMs as AI? Go ahead, but then you also adopt the problems of LLMs as the problems of AI. Defining AI, and thus its metrics, is a moving target. When we can’t agree on what it is, we can’t agree on what it can do.