• floquant@lemmy.dbzer0.com · 1 day ago

    Even if it was only live for a couple of days, I wonder how much it inflated the “commits made by Copilot” metric that they will no doubt brag about to their investors.

  • sp3ctr4l@lemmy.dbzer0.com · 2 days ago

    Microsoft claims to have fixed

    Via their own vibe-coding.

    That supposedly fixed the vibe-coding attribution problem.

    Mhm, yep, seems hunky dory to me!

    • egrets@lemmy.world · 2 days ago

      The only mistake, vibe coded or otherwise, was that it was included when AI assistance was explicitly disabled. It’s otherwise entirely deliberate.

      That said, I’m not sure the concept is a bad thing overall. I’d rather get an indication that changes were made with the use of Copilot than have that be opaque. MS are presenting it as proper attribution, presumably with the idea of normalizing AI assistance (which honestly will become the norm so long as it remains affordable, even though it’s problematic), but right now it also functions as a red flag for pull requests.
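
      The “red flag” use described here can be scripted. A sketch that builds a throwaway repo with one attributed commit and then filters for it — the trailer wording and the bot’s address are assumptions based on this thread, not a documented format:

```shell
# Throwaway demo repo with one Copilot-attributed commit
# (trailer text below is an assumption, not Microsoft's documented format)
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com
git config user.name demo
echo 'hello' > a.txt && git add a.txt
git commit -q -m 'Tweak config

Co-authored-by: Copilot <copilot@github.com>'

# List commits whose messages carry the trailer; git matches the
# --grep pattern against each line of the commit message.
git log --all --oneline --grep='Co-authored-by: Copilot'
```

      Run in a real repository, the final command alone would surface every commit that deserves the extra scrutiny.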

      • flying_sheep@lemmy.ml · edited · 1 day ago

        Two mistakes:

        1. The all setting makes little sense unless someone wants to enforce a zero-AI policy, so it shouldn’t have been the default. In-line completions don’t justify attribution, which makes the chatAndAgents setting the more sensible choice. (Further arguments could be made about uncopyrightable LLM output and the fact that “creation height” can’t be determined automatically.)
        2. The code is buggy and attributes the contributions of other LLMs to Copilot.

        The one you cited is more of a safety measure against the intersection of these two issues: if the code worked correctly, it wouldn’t add the Copilot line anyway.
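
        A zero-AI policy like the one mentioned above could also be enforced mechanically. As a sketch — the trailer text is an assumption based on this thread — the body of a commit-msg hook might look like:

```shell
# Sketch of the check a commit-msg hook could run for a zero-AI policy.
# The trailer wording is an assumption, not Microsoft's documented format.
check_no_copilot_trailer() {
    # $1: path to the commit message file (git passes this to the hook)
    if grep -q '^Co-authored-by: Copilot' "$1"; then
        echo "error: commit message carries a Copilot attribution trailer" >&2
        return 1
    fi
    return 0
}
```

        Saved as .git/hooks/commit-msg (invoking the check on "$1"), this would reject any commit Copilot attributed itself on, whichever setting produced the trailer.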

        • egrets@lemmy.world · 1 day ago

          Yes, fair: your second observation isn’t mentioned directly in the comment I linked (just my point plus your first point), but it is admitted explicitly in this follow-up post.

      • sp3ctr4l@lemmy.dbzer0.com · edited · 17 hours ago

        I’m gonna try to say this gently:

        Microsoft… is gone now.

        They contracted terminal corporate dementia.

        They’re not going to be the same anymore.

        … I used to work for them.

        Something like this was inevitable, given their highly cliquey and authoritarian corporate culture.

        They’ve imploded under the weight of around two decades of accumulating technical debt, around two decades of the guys and gals that huffed the most farts getting the most promoted.

        They are now primarily a member of the military industrial complex.

        • egrets@lemmy.world · 2 days ago

          I agree that they’re floundering, and that they’re desperately trying to dig themselves out of a hole (if you’ll forgive the mixed metaphor), but I don’t think it’s useful to chalk up to AI mistakes what is demonstrably a human marketing decision.

          • sp3ctr4l@lemmy.dbzer0.com · edited · 17 hours ago

            Double post, but:

            I just now realized I fat-fingered my own semi-manual autocorrect and did not originally write ‘They contracted’, which is what I meant.

            I have accordingly here made log of my revision commit, which should now be reflected above.

            Apologies for any confusion this may have created, derp.

          • sp3ctr4l@lemmy.dbzer0.com · 2 days ago

            I agree with you… it’s the people and the way they’ve basically conditioned themselves to act, not the LLM.

            Also, to further confuse the metaphor:

            You can’t dig your way out of a hole that you flooded, doesn’t matter how hard you pedal your mind bicycle.

  • vithigar@lemmy.ca · 1 day ago

    This happened to me on a Nextra docs site I use for one of my projects. Copilot added its attribution to a two-line commit that added two items to a page’s _meta.json file. I was baffled as to what Copilot could even potentially have done to help.

  • luciferofastora@feddit.org · 2 days ago

    One particularly nasty example is when a dev “deleted Copilot’s generated English commit message and manually wrote [their] own commit message instead. However, after the commit was created, the final Git history still contained the Copilot co-author line.”

    While I can see an argument that using it in development should see your code marked as AI-assisted, it wouldn’t even hold in this case: “Copilot only generated a commit message suggestion; it did not author the code”, and even that suggestion was rejected in favour of manual work.

    It sneakily messed with the commit, not just without explicit consent but despite the user’s explicit dissent. That’s not even an opt-in/opt-out discussion at that point, if you don’t get an option.
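
    When the trailer lands despite explicit dissent, it can at least be scrubbed before pushing. A sketch in a throwaway repo — the trailer wording is an assumption from this thread, and rewriting history like this is only safe for commits that haven’t been pushed:

```shell
# Throwaway repo with one commit that picked up an unwanted trailer
# (trailer text is an assumption, not Microsoft's documented format)
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com
git config user.name demo
echo 'hello' > a.txt && git add a.txt
git commit -q -m 'Rewrite commit message by hand

Co-authored-by: Copilot <copilot@github.com>'

# Drop the trailer line and amend the commit with the cleaned message
cleaned=$(git log -1 --format=%B | grep -v '^Co-authored-by: Copilot')
git commit --amend -q -m "$cleaned"
git log -1 --format=%B
```

    The final log call shows the hand-written message with the trailer gone, which is what the dev in the quoted report presumably expected in the first place.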

    Now I wonder whether that could happen even if you don’t have Copilot at all (or rather no license, no matter how much the AI-postles at work have been trying to sell me one). Intuitively, it shouldn’t, but who knows…