
  • Underwaterbob@sh.itjust.works · ↑7 · 1 day ago

    AI is trained on the Internet. Look at the bullshit on the Internet. AI will take some random schmoe’s bullshit opinion and present it as hard fact.

    That, and it re-introduces the old problem of getting answers from search results without visiting any of the underlying websites. Last time around, sites responded by burying answers far down the page so they wouldn’t show up in search previews, making everything shittier. What kind of response will there be to AI summaries? Everything will undoubtedly get even shittier as sites try to force people to visit instead of just reading the AI summary. Hello, even more obfuscation. We’re taking the greatest method of spreading information around the globe ever devised and just absolutely filling it to the brim with bullshit.

    • oakward@feddit.org · ↑4 · 1 day ago

      This is only the beginning. Soon there will be LLMs trained on other LLMs’ garbage. And those LLMs will also post and write crap on the Internet. The true pinnacle of shiteposting.

    • Tollana1234567@lemmy.today · ↑3 · 1 day ago

      Oh yeah, it does. I’ve seen Google’s summaries; they’re just a mashup of different blog posts presented as truth, and that isn’t a source. It’s basically asking the LLM for its opinion.

  • drhodl@lemmy.world · ↑18 · 2 days ago

    I don’t hate AI. I just hate the untrustworthy rich fucks who are forcing it down everyone’s throats.

  • Sunflier@lemmy.world · ↑18 · 1 day ago · edited

    I hate AI because it’s replacing jobs (a.k.a. salaries) without us having a social safety net to make that painless.

    We’ve replaced you with AI

    -CEO

    AI is replacing most of the jobs, and there aren’t enough open positions to be filled by the now-unemployed.

    -Economists

    I need food stamps, medical care, housing assistance, and unemployment.

    -Me

    No! Get a job you lazy welfare queen!

    -Politicians

    Where? There aren’t any.

    -Me

    Not my problem! Now, excuse me while I funnel more money to my donors.

    -The same politicians

    • finitebanjo@lemmy.world · ↑3 · 1 day ago

      The good news is that while automation like robot arms continues to replace humans, the AI side of it has been catastrophic, and early adopters are often seen expressing remorse and reverting the changes.

  • Canopyflyer@lemmy.world · ↑8 · 1 day ago · edited

    The technology is way too resource-intensive for the benefit it gives. By resources, I mean environmental and technological. Have you seen the prices of DDR5 RAM? Microsoft is actually working to bring TMI 1 back online. TMI = Three Mile Island, as in a full-sized nuclear reactor that has been retired from service since 2019. The only reason they are not bringing TMI 2 back online is because IT F$%KING MELTED DOWN IN 1979.

    Add to that, Micron exited the consumer market to provide memory to the AI market only… What the actual F#$k?

    Now the bubble has formed, and the people who shoved tens of billions into it are trying to fill that bubble by any means necessary. Which means the entire population of this country is constantly bombarded by it for purposes it is ill-suited to.

    When, not if, this bubble pops it’s going to be a wild ride.

    • AxExRx@lemmy.world · ↑7 · 1 day ago

      At some point, we should legislate that all non-production tech businesses have to be energy-positive. As in: want to build a data center? It’s got to generate more solar/wind etc. than it uses, or it’s unpermittable.

  • switcheroo@lemmy.world · ↑5 · 1 day ago

    I hate the fact that thanks to ChatGPT, every twerp out there thinks em-dashes are an automatic sign of something being written by AI…

    As a writer and an em-dash enjoyer, hell with that!

  • finitebanjo@lemmy.world · ↑5 · 1 day ago · edited

    I’ve seen it successfully perform exactly one task without causing more harm or creating liability for the people using it:

    Misinformation campaigns.

    And that’s exactly how the AI companies are using it to grow exponentially, lying about both its costs and its capabilities.

    It’s weird that this is somehow an unpopular opinion these days but I don’t like being lied to.

    • AxExRx@lemmy.world · ↑3 · 1 day ago

      I’ve been hearing the claim occasionally for the last several years now that we’ve moved into the ‘post-truth’ age. AI has kind of cemented that for me.

  • Greenbeard@lemmy.zip · ↑6 · 1 day ago

    I don’t hate AI. I hate it being forced down everyone’s throat, and I don’t trust the companies running it to keep the data they collect safe and private.

  • EndlessNightmare@reddthat.com · ↑37 ↓1 · 2 days ago · edited

    No one has convinced me how it is good for the general public. It seems like it will benefit corpos and governments, to the detriment of the general public.

    It’s just another thing they’ll use to fuck over the average person.

    • rumba@lemmy.zip · ↑9 · 2 days ago

      It COULD help the average person, but we’ll always fuck it up before it gets to that point.

      You could build an app that teaches. Pick the curriculum, pick the tests, pick the training material for the users, and use the LLM to intermediate between your courseware and the end users.

      LLM’s are generally very good at explaining specific questions they have a lot of training on, and they’re pretty good at dumbing it down when necessary.

      Imagine an open-source, free college course where everyone gets as much time as they need and isn’t embarrassed to ask whatever questions come to mind in the middle of the lesson. Imagine more advanced students in a class not being held back because some slower students didn’t understand a reading assignment. It wouldn’t be hard to out-teach an average community college class.

      But free college that doesn’t need a shit ton of tax money? Who profits off that? We can’t possibly make that.

      How about a code tool that doesn’t try to write your code for you, but watches over what you’re doing and points out possible problems? What if you strapped it onto a compiler and got warnings that you’ve left dangerous attack vectors open, or notes where buffer overflows aren’t checked?
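That "watch, don't write" idea could start as something as simple as a lint pass. A toy sketch (purely illustrative: the function name, the warning wording, and the grep-style matching are my own invention; a real tool would parse the code rather than pattern-match lines):

```python
import re

# Toy sketch of the "watches over what you're doing" idea: a trivial lint
# pass that flags calls to C functions with well-known buffer hazards.
RISKY_CALLS = {
    "gets": "reads input with no bounds check; use fgets",
    "strcpy": "no destination size check; consider strncpy/strlcpy",
    "sprintf": "can overflow the destination buffer; consider snprintf",
}

def flag_risky_calls(c_source: str) -> list[str]:
    """Return one warning string per risky call found in the C source."""
    warnings = []
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        for func, why in RISKY_CALLS.items():
            # \b keeps us from matching e.g. the safe fgets() as gets().
            if re.search(rf"\b{func}\s*\(", line):
                warnings.append(f"line {lineno}: {func}(): {why}")
    return warnings

demo = 'int main(void) { char buf[8]; gets(buf); strcpy(buf, "hi"); }'
print(flag_risky_calls(demo))
```

An LLM layered on top of a check like this would mainly be adding natural-language explanations; the detection itself can stay deterministic.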

      Reading medical images is a pretty decisive win. The machine going back behind the doctor and pointing out what it sees based on the history of thousands of patient images is not bad. Worst case the doctors get a little less good at doing it unassisted, but the machines don’t get tired and usually don’t have bad days.

      The problem is capitalism. You can’t have anything good for free because it’s worth money. And we’ve put ALL the money into the tech and investors will DEMAND returns.

      • petrol_sniff_king@lemmy.blahaj.zone · ↑1 · 1 day ago

        Imagine an open-source, free college course where everyone gets as much time as they need and aren’t embarrased to ask whatever questions come to their minds in the middle of the lesson.

        My impression of the average student today is that they lack so much curiosity - partly from YouTube-Shorts-induced ADHD, partly because ChatGPT just answers all of their homework questions for them with no effort at all - that a course like this would be functionally useless.

        This is not an issue of capitalism, detestable as it is: young people are using AI to offload the mental burden of learning. Removing money incentives doesn’t fix this.

    • InputZero@lemmy.world · ↑4 · 2 days ago

      I’ll say two things I have actually found ChatGPT useful for: helping me flesh out NPCs in the tabletop RPG campaign I’m running, and diagnosing tech problems. That’s it. I’ve tried to have it program, make professional documents, and search things for me, and all of it sucks compared to just doing it myself. Definitely not worth pouring a significant chunk of the global GDP into.

    • Perspectivist@feddit.uk · ↑3 · 2 days ago

      It’s more like the opposite. There’s not much evidence of it saving money or increasing productivity for companies to the extent that it covers the cost of running it, whereas for the general population it can be helpful with stuff like writing assistance. But I bet most people use it like I do, which is entertainment. ChatGPT has 800 million weekly users - people clearly are getting some value from it.

      • Ohmmy@lemmy.dbzer0.com · ↑3 · 2 days ago · edited

        Of those 800 million, how many are paying? That number could easily be inflated by people doing things without real value to them. I also don’t know how many of those users need professional help, whether for severe social anxiety or for finding intimacy in a chatbot.

        Like, you’re right there has to be some value to it but I just can’t see trillions of USD in value.

        • Perspectivist@feddit.uk · ↑1 · 2 days ago

          Entertainment value - not monetary. I don’t pay for an AI because it makes me money. I do it because I enjoy using it.

          • Ohmmy@lemmy.dbzer0.com · ↑1 · 2 days ago

            Entertainment value is value. My dog has value to me but is nothing but a monetary cost; I enjoy having my dog so much that I will pay that cost, because he is that valuable to me.

            Someone downloading and using an app isn’t indicative of that app having much value to the end user.

    • boonhet@sopuli.xyz · ↑12 · 2 days ago

      It’s probably from a redditor, who is probably white and male. Y’know, self-deprecating humor is pretty common among redditors, just like it is here.

        • boonhet@sopuli.xyz · ↑5 ↓1 · 2 days ago · edited

          This isn’t a Twitter screenshot, it’s a Facebook one. Note the globe icon, that means a public post on Facebook. He’s also a niche microcelebrity, so the verification kinda makes sense?

          It seems he’s stopped posting on Twitter after 2024, having 70k posts total - so he must’ve quit cold turkey. No blue check on his Twitter profile.

          There’s also this tweet from him in 2017. I do not think this man is a nazi, or even nazi-adjacent.

          • Duamerthrax@lemmy.world · ↑3 ↓4 · 2 days ago

            I never said anything about nazis.

            I only know blue check marks from twitter. There’s really not much difference between the two though.

            • boonhet@sopuli.xyz · ↑4 · 2 days ago

              …Why is a blue check mark on Twitter bad if not for the fact that paying for it supports a nazi platform? I’m not sure I follow your logic.

              The Meta one isn’t paid, it’s just something you’re given if they can verify you’re who you say you are.

              • Duamerthrax@lemmy.world · ↑3 ↓2 · 2 days ago

                Mark Zuckerberg isn’t that much different from Elon Musk as far as politics go.

                If you want to be verified, you should just have a personal website that you publicly direct people to. Using someone else’s social media website as your primary soap box has always been madness.

                • jdnewmil@lemmy.ca · ↑2 · 2 days ago

                  So you would click accept on my self-signed https website? Want some land in Florida?

      • Electricd@lemmybefree.net · ↑3 ↓8 · 2 days ago · edited

        just feels wrong

        like it’s making stereotypes feel normal and creating xenophobia or something

        when done humorously, it’s fine, but here it just seems serious

        • boonhet@sopuli.xyz · ↑7 · 2 days ago

          But it is done humorously, is that not an obvious joke?

          Maybe not the funniest joke ever, but definitely not something someone’s saying seriously.

  • Rose@slrpnk.net · ↑11 · 2 days ago

    I don’t hate AI (specifically LLMs and image diffusion thingy) as a technology. I don’t hate people who use AI (most of the time).

    I do hate almost every part of the AI business, though. Most of the AI stuff is hyped by the most useless “luminaries” of the tech sector, who know a good profitable grift when they see one. They have zero regard for the legal, social, and environmental implications of their work. They don’t give a damn about the problems they are causing.

    And that’s the great tragedy, really: it’s a whole lot of interesting technology with a lot of great potential applications, and the industry is being run into the ground by idiots chasing an economic bubble that’s going to end disastrously. It’s going to end up in a tech cycle similar to nuclear power: a few prominent disasters, a whole lot of public resentment and backlash, and it’ll take decades until we can start having sensible conversations about it again. If only we’d had a little moderation to begin with!

    The only upside the AI business has had is that it at least pretended to give a damn about open source and open access to data, but at this point it’s painfully obvious that to AI companies this is just a smoke screen to avoid getting sued over copyright concerns - they’d lock everything up as proprietary trade secrets if they could have their way.

    As a software developer, I was at first super excited about the genAI stuff because it obviously cut down the time needed to consult references. Now a lot of tech bosses tell coders to use AI tools even in cases where that makes everyone less productive.

    As an artist and a writer, I find it incredibly sad that genAI didn’t hit the brakes a few years ago. I’ve been saying this for decades: I love a good computerised bullshit generator. Algorithmically generated nonsense is interesting - a great source of inspiration for your ossified brain cells, fertile ground for improvement. Now, however, the AI-generated stuff pretends to be as human-like as possible, and it’s doing a terrible job of it. Tech bros half-assedly market it as a “tool” for artists, while the studio bosses who buy the tech chuckle at that, knowing they’ve found a replacement for the artists. (Want to make genAI tools for artists? Keep the output patently unusable out of the box.)

    • plenipotentprotogod@lemmy.world · ↑3 · 2 days ago

      I’m hopeful that when the bubble pops it’ll be more like the dot com crash, which is to say that the fallout is mostly of the economic variety rather than the superfund variety. Sure, that’ll still suck in the short term. But it will ideally lead to the big players and VC firms backing away and leaving behind an oversupply of infrastructure and talent that can be soaked up at fire sale prices by the smaller, more responsible companies that are willing to stick out the downturn and do the unglamorous work of developing this technology into something that’s actually sustainable and beneficial to society.

      That’s my naive hope. I do recognize that there’s an unfortunately high probability that things won’t go that way.

    • jdnewmil@lemmy.ca · ↑2 · 2 days ago

      The value in LLMs is in the training and the data quality… so it is easy to publish the code and charge for access to the data (DaaS).

  • Korhaka@sopuli.xyz · ↑8 · 2 days ago

    I don’t hate it, I hate how companies are forcing it in regardless of how stupid it is for the task.

  • melsaskca@lemmy.ca · ↑15 · 2 days ago

    Anything the billionaire cabal pushes on us I automatically hate. Don’t even need to know what it is. If they are pushing it you know there is some nefarious shit under the hood.

  • Klear@quokk.au · ↑2 ↓1 · 1 day ago

    Is that Brian “Please don’t call me Brian “Brian Kibler” Kibler” Kibler?

  • yesman@lemmy.world · ↑109 ↓11 · 2 days ago

    AI doesn’t exist. This is like asking an atheist why they hate god.

    If you’re talking about LLMs and the like, they’re unpopular on Lemmy because tech people are over represented here and tech people understand how these technologies work and why all the hype isn’t just false, but malicious.

    • Perspectivist@feddit.uk · ↑13 · 2 days ago

      AI has existed for decades. The chessbot on Atari is an AI.

      What doesn’t exist is AGI, but that’s not synonymous with AI. Most people just don’t know the right terms and lump it all together as if it were one thing.

      If one expects a large language model designed to generate natural-sounding language to be generally intelligent like a sci-fi AI assistant, then no wonder they find it lacking. They’re expecting autonomous driving from cruise control.

    • cecilkorik@lemmy.ca · ↑30 ↓1 · 2 days ago

      You mean a bunch of advertising and media companies that control and gatekeep the news are hyping something that’s making them trillions of dollars? That seems… so unbelievable!

      • waxy@lemmy.ca · ↑3 · 1 day ago

        Isn’t the “AI” boom making very little for the companies hawking it and trillions for the hardware providers? It feels analogous to the people who sold shovels and pickaxes during the gold rush.

      • I_Jedi@lemmy.today · ↑16 · 2 days ago

        Oh, that’s me. Microsoft gave me backdoor access to your computer so I can play against you.

      • pulsewidth@lemmy.world · ↑3 ↓2 · 2 days ago

        I know you’re meming, but in Civilization (as in most games), you’re playing against predefined scripts and algorithmic rules that the computer opponent has, as well as having cheaper costs for resources than the user at higher difficulty levels - because it cannot compete with a skilled human player at that level (it literally cheats).

        No LLM, no neural network, no deep learning… not ‘AI’ in the modern sense that’s being discussed here.

        • Perspectivist@feddit.uk · ↑4 ↓1 · 2 days ago

          It’s not predefined, though. The rules of the game are, but its actions aren’t. It observes the environment and changes its behavior based on that. That’s narrow intelligence, and thus it meets the criteria for AI.

          A chess engine isn’t unintelligent just because it’s bound by the rules of the game.

          • pulsewidth@lemmy.world · ↑2 · 2 days ago

            It is absolutely predefined - if you make the same moves it will give you the same results, every time. Same as playing ChessMaster 2000 from 1986.

            It may narrowly fit into the broad definition of ‘AI’ (like, since the 70s) but that’s not what’s being discussed in this thread.

            Believe what you like though.

            • Perspectivist@feddit.uk · ↑2 ↓2 · 2 days ago

              No, it’s not predefined - chess has about 10^120 possible games. That’s an astronomically large number, way too vast to pre-store or hardcode. It’s intelligence through computation, not a script.

              Believe what you like though.
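The "computation, not a script" point is easy to see in miniature. Here is a hedged toy example (Nim rather than chess, and nothing to do with any actual chess engine): the program stores no move table at all, yet plays perfectly by searching the game tree at decision time. Determinism does not imply a lookup.

```python
# Toy minimax player for Nim: players alternate taking 1-3 stones,
# and whoever takes the last stone wins. No move is stored anywhere;
# every move is computed by searching the game tree on demand.

def minimax(stones: int, maximizing: bool):
    """Return (score, best_take) from the maximizing player's perspective."""
    if stones == 0:
        # The previous player took the last stone and won.
        return (-1 if maximizing else 1), None
    best = None
    for take in (1, 2, 3):
        if take > stones:
            break
        score, _ = minimax(stones - take, not maximizing)
        if (best is None
                or (maximizing and score > best[0])
                or (not maximizing and score < best[0])):
            best = (score, take)
    return best

score, move = minimax(10, True)
print(score, move)  # prints "1 2": a winning line exists by taking 2 stones
```

Same input, same output every time, yet nothing was "predefined" except the rules: the move falls out of the search.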

              • zzffyfajzkzhnsweqm@sh.itjust.works · ↑3 · 2 days ago

                “Script” in a computer-programming sense: an algorithm. The general behavior is most likely predefined, so it’s not a script in the sense that it always does the same thing; its behavior is most likely described using “if” statements, e.g. “if the opponent did this, respond with that…”. The algorithm can also “remember” some actions and act based on them. However, the AI is most probably not actively learning from your actions; it has all its knowledge predefined.

                Some more advanced algorithms use self-learning principles, but this is very rare in games since it is resource-intensive.

                But even machine learning is not AI. Even an LLM is not AI. At least “LLM” has become a synonym for “AI” in recent years, though.

                • Perspectivist@feddit.uk · ↑1 ↓1 · 1 day ago

                  Artificial Intelligence is the broad field of creating machines that can perform tasks requiring human-like intelligence, such as reasoning, learning, or perception. Machine Learning is a subset of AI where systems learn from data without explicit programming. Large Language Models are a specific type of ML model trained on vast text data to generate or understand language. LLMs are very much AI, and while they’ve popularized the term “AI” recently, the hierarchy stands: LLMs are ML, and ML is AI.

    • panda_abyss@lemmy.ca · ↑19 · 2 days ago

      Today my boss asked me why Gemini suggested made up columns when he was trying to query our database. I just told him it also makes up fake tables.

      This shit is half baked and really never should have been foisted on the public.
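One way teams hedge against exactly this failure mode (my own sketch, not anything Gemini or the commenter's workplace actually does) is to validate model-suggested identifiers against the live schema before trusting them:

```python
import sqlite3

# Sketch: before trusting an LLM-suggested query, check that every column
# it references actually exists in the table. (Deliberately naive: it takes
# a column list; a real guardrail would parse the SQL itself.)
def unknown_columns(conn, table, suggested_cols):
    rows = conn.execute(f'PRAGMA table_info("{table}")').fetchall()
    real = {row[1] for row in rows}  # row[1] is the column name
    return [c for c in suggested_cols if c not in real]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
# The model "hallucinated" signup_score; the check catches it.
print(unknown_columns(conn, "users", ["id", "email", "signup_score"]))
```

It doesn't stop the model from making columns up, but it turns a confident wrong answer into an immediate, checkable error.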

      • msage@programming.dev · ↑7 · 2 days ago

        This shit has cost the investors untold money, and it was promised to revolutionize the world, so by golly it will, by force if it must.

      • Whats_your_reasoning@lemmy.world · ↑2 · 2 days ago

        Also not a tech person, but I am an artist. I used to consider going into digital art, but now I’m grateful I didn’t and instead honed… I guess you could call it “manual” art? As in, art I make with my hands? Maybe “analog” or “traditional” art?

        Point is, I haven’t seen an AI create a pencil drawing or an acrylic painting. I get the feeling that as people tire of AI-generated images, they may find renewed interest in these distinctly human-made art forms. I suppose we’ll have to wait and see. For now, AI may try to steal forms and ideas, but picking up a pencil or a paintbrush and creating something on a canvas is still out of its reach (thank goodness).

  • Gorilladrums@lemmy.world · ↑33 ↓1 · 2 days ago · edited

    I don’t hate AI, LLMs are incredibly powerful tools that have an incredibly wide range of uses. The technology itself is something that’s very exciting and promising.

    What I do hate is how they’re being used by large corporations. A small handful of big tech companies (Google, Microsoft, Facebook, OpenAI, etc) decided to take this technology and pursue it in the greediest ways possible:

    1. They took open source code, built on top of it, and closed it off so they could sell it

    2. They scraped all the data on the internet without consent and used it to train their models

    3. They made their models generate stuff based on copyrighted works without permission or giving credit, thus basically stealing the content

    4. But that wasn’t enough for them so they decided to train their models on every interaction you have with their LLM services, so all your private conversations are stored and recycled even if you don’t want that to happen

    5. They use the data from the conversations you’ve had with the chatbots to build customer profiles about you, which they sell to advertisers so they can send you floods of personalized ads

    6. They started integrating their LLMs into their other products as much as they could so they could artificially increase their stock prices

    7. They aggressively campaign for other companies to buy and integrate their models so both parties could artificially increase their stock prices

    8. In order to meet their artificially induced demand, they’re sucking the life out of the electricity grid, which is screwing over everybody else

    9. They’re also taking over the hardware industry and killing off consumer electronics, since it’s more profitable for manufacturers to sell to AI companies than to consumers

    10. They’re openly bribing, lobbying, and campaigning governments to give them grants, tax breaks, and keep regulations at a minimum so they could do whatever they want and have society pay for the privilege

    11. They’re using these LLMs to cut as many jobs as possible so they can pinch pennies just a little more, hence the recent massive waves of layoffs. This is being done even when the LLM replacements perform far worse than humans.

    12. All of this is being done with zero regard to the environmental damage caused by them with their monstrous data centers and electricity consumption

    13. All of this is being done with zero regard to the harmful impacts caused to people and society. These LLMs frequently lie and spread misinformation, they feed into delusions and bad habits of mentally unwell people, and they’re causing great damage to schools since students could use these models to easily cheat and nothing can be done about it

    When you put all of this together, then it’s easy to understand why people hate AI. This is what people oppose, and rightfully so. These corporations created a massive bubble and put our economy at risk of a major recession, they’re destabilizing our infrastructure, destroying our environment, they’re corrupting our government, they’re forcing tens of thousands of people into dire financial situations by laying them off, they’re eroding our privacy and rights, and they’re harming our mental health… and for what? I’ll tell you, all of this is done so a few greedy billionaires could squeeze a few more dollars out of everything so they could buy their 5th yacht, 9th private jet, or 7th McMansion. Fuck them all.

    • pulsewidth@lemmy.world · ↑11 · 2 days ago

      When people say “I fucking hate AI”, 99% of the time they mean “I fucking hate AI™©®”. They don’t mean the technology behind it.

      To add to your good points: I’m a CS grad who studied neural networks and machine learning years back, and every time I read some idiot claiming something like “this scientific breakthrough has got scientists wondering if we’re on the cusp of creating a new species of superintelligence” or “90% of jobs will be obsolete in five years”, it annoys me, because it’s not real, and it’s always someone selling something. Today’s AI is the same tech they’ve been working on for 30+ years and incrementally building upon, but as Moore’s Law has marched on, we now have the storage pools and computing power to run very advanced models and networks. There is no magic breakthrough, just hype.

      The recent advancements are all driven by the $1500 billion spent on grabbing as many resources as they could - all because some idiots convinced them it’s the next gold rush. What has that $1500 bil got us? Machines that can answer general questions correctly around 40% of the time, plagiarize art for memes, create shallow corporate content that nobody wants, and write some half-decent code cobbled together from Stack Overflow and public GitHub repos.

      What a fucking waste of resources.

      What’s real is the social impacts, the educational impacts, the environmental impacts, the effect on artists and others who have had their work stolen for training, the useability of the Internet (search is fucked now), and what will be very real soon is the global recession/depression it causes as businesses realize more and more that it’s not worth the cost to implement or maintain (in all but very few scenarios).

      • devedeset@lemmy.zip · ↑5 · 2 days ago · edited

        I’m really split with it. I’m not a 10x “rockstar” <insert modern buzzword> programmer, but I’m a good programmer. I’ve always worked at small companies with small teams. I can figure out how to parse requirements, choose libraries/architecture/patterns, and develop apps that work.

        Using Copilot has sped my work up by a huge amount. I do have 10 YoE from before Copilot existed, and I can use it to help write good code much faster. The output may not be perfect, but my code wasn’t perfect without it either. The thing is, I have enough experience to know when it’s leading me down the wrong path, and that still happens pretty often. What it helps with is implementing common patterns, especially with common libraries. It basically automates the “google the library docs/Stack Overflow and use the code there as a starting point” aspect of programming. (edit: it also helps a lot with logging, writing tests, and rewriting existing code, as long as the code isn’t too whacky - and even then you really need to understand the existing code to avoid a mess of bugs)

        But yeah, search is completely fucked now. I don’t know for sure, but I would guess Stack Overflow use is way down. It does feel like many people are being pigeonholed into using the LLM tools because they’re the only things that sort of work. There’s also the vibe-coding phenomenon, where people without experience just YOLO out pure tech debt, especially with the latest and greatest languages/libraries/etc., where the LLMs don’t work very well because there isn’t enough data.

        • Metju@lemmy.world
          link
          fedilink
          arrow-up
          3
          ·
          2 days ago

          LLMs are an okay-ish tool if your code style doesn’t veer from what 99% of open-source codebases look like. Use any fringe concept in a language (for example, treating errors as values in a language ridden with exceptions, or using functional concepts in an OOP language) and you will have problems.
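          To make the “errors as values” point concrete, here’s a toy sketch of that style in Python (my own illustration, so names like `parse_port` are made up) - exactly the kind of code an LLM trained on exception-heavy Python keeps trying to rewrite into try/except:

          ```python
          from dataclasses import dataclass
          from typing import Generic, TypeVar, Union

          T = TypeVar("T")


          @dataclass
          class Ok(Generic[T]):
              value: T


          @dataclass
          class Err:
              message: str


          Result = Union[Ok[T], Err]


          def parse_port(raw: str) -> Result[int]:
              # Return an error value instead of raising -- callers must
              # inspect the result, so no failure path goes unhandled.
              if not raw.isdigit():
                  return Err(f"not a number: {raw!r}")
              port = int(raw)
              if not 0 < port < 65536:
                  return Err(f"out of range: {port}")
              return Ok(port)
          ```

          Perfectly idiomatic in Rust or Go, fringe in Python - and the suggestions degrade accordingly.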

          Also, this crap tends toward automated copy-paste, which is especially bad when it skips abstracting away a concept you would have noticed if you had written the code yourself.

          Source: own experience 😄

          • devedeset@lemmy.zip
            link
            fedilink
            arrow-up
            2
            ·
            2 days ago

            Totally agree. In my day to day work, I’m not dealing with anything groundbreaking. Everything I want/need to code has already been done.

            If you have a Copilot license and are using the newest Visual Studio, it enables the agentic capabilities by default and will actually write code into your files directly. I have not done that and will not do that. I want to see and understand what it is trying to do.

        • pulsewidth@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          2 days ago

          I agree it’s great at writing and scaffolding parts of code and selecting libraries - it definitely has value for coding. $1,500 billion of value, though? I doubt it.

          My main concern lies with the next gen of programmers. The output of ChatGPT (and Claude etc.) requires significant prior programming experience to make sense of and adjust (or correct) to suit the scope and requirements of the project - and it will be much harder for junior devs to learn that skill with LLMs doing all the groundwork. It’s essentially the same problem wider education is facing now, with kids/teens just using LLMs to write their homework and essays. The consequences will be long term and significant. In addition (for coding), it’s taking away the entry-level work that junior devs would usually do and then have cleaned up for prod by senior devs - and that’s not theory, the job market for junior programmers is dying already.

    • bstix@feddit.dk
      link
      fedilink
      arrow-up
      4
      arrow-down
      1
      ·
      2 days ago

      I think it’s interesting that they can steal all this stuff and yet be unable to figure out how to sell it.

      All the money, all the data, all the energy, all the computer power, all the political control. And yet, they can’t manage to sell a single dollar worth of their product.

      Of course it’ll be shittified by commercials in and out of the content, and of course that will lead to paid models, but it’s not going to be very profitable, because nobody *really* needs bad intelligence. “Oh, it costs something? No thanks then, we already have intelligence at home.”

      Yes yes, the users are the product, yes, but who then is buying that user data? Advertisers and stuff, yeah yeah, but at what point does any of this manifest itself as a single fucking sales transaction where a real person pays a company for a real product? Fucking never.

      The whole thing is worthless.

      • rumba@lemmy.zip
        link
        fedilink
        English
        arrow-up
        4
        ·
        2 days ago

        they can’t manage to sell a single dollar worth of their product.

        Ohh don’t worry, that’s not how this works :)

        We’re still in the venture capital stage. The companies are circle-jerking, paying each other off with venture funds and stock splits. They don’t need to be making money at this point because they’re already getting everything they ask for.

        Those $50-$200 packages from all the big companies are just there to get people used to the idea. They’re making all their money selling each other useless support chatbots and horrible phone systems, claiming these can cut staff in half. Well, they could always have cut staff in half; customers have had to deal with shitty wait times for years.

        You’ll pay for AI through the rising prices of your software. Those costs are absorbed and passed on to you as micro-charges buried inside your actual subscriptions and payments.

        Once they manage to get AI intertwined in every system out there, they’re free to collude as a market and raise prices slowly. AI will drive software price inflation and hardware shortages that make anyone with a datacenter or enterprise hardware manufacturing capacity very, very rich.

        It could even be that in the end this isn’t a bubble, it’s just a grift, and it never pops - it just becomes so expensive that your average person can barely eat if they expect to use software tools for their work.