• MangoCats@feddit.it · 11 hours ago

    I felt your comments lacked acknowledgement of not the downside to using the tools, but the wider conversation that uses the keyword “AI” yet really has barely anything to do with it

    Yeah, I get tunnel vision like that: when people say “AI is a problem”, my focus is on the AI, not on people’s underlying, pre-existing problems that haven’t gone away since AI “came out / got big”.

    • OpenStars@piefed.social · 10 hours ago

      The word itself keeps changing its meaning: it used to mean ML techniques, then it was used in anticipation of gen-AI, and now it supposedly means “capitalism distilled”? See e.g. https://www.structural-integrity.eu/is-there-a-need-for-ai-after-capitalism/ for an excellent example of the kind of anxiety surrounding AI that we are talking about.

      I agree with you that ML itself is not a problem, nor even is LLM technology. Although, like nuclear power, the closer we advance towards true AI, the more powerful the tool and the greater the danger its misuse portends, as you said. And also as you said, as it got big the discussion moved towards that latter topic without bothering to be precise about what was being discussed, instead calling everything by the (clickbait?) buzzword “AI”.

      • MangoCats@feddit.it · 9 hours ago

        The “danger line” I perceive is when we give anything “agency”. It can be something as simple as a float-level switch on a lake controlling the water-release gates on a dam, but if it malfunctions (and nobody notices in time) the dam might get over-topped, or the whole lake might be emptied, potentially flooding downstream communities or simply wasting valuable water needed to get through the next dry season… all that from a simple little (binary) bit of “artificial intelligence”. The moment it is granted “agency” to operate the flood gates without competent oversight, it becomes dangerous.
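
        To make the oversight point concrete, here is a toy sketch (hypothetical names and thresholds, nothing like a real SCADA controller) of the difference between letting the one-bit switch act on its own and cross-checking it before it acts:

        ```python
        # Toy illustration only: a single binary float switch given direct "agency"
        # over a dam's release gate, versus the same switch wrapped in a sanity
        # check against an independent gauge plus an operator acknowledgement.
        from dataclasses import dataclass

        SPILLWAY_THRESHOLD_M = 100.0  # hypothetical "too high" level, metres above datum

        @dataclass
        class LakeState:
            level_switch_tripped: bool  # the one-bit "artificial intelligence"

        def naive_controller(state: LakeState) -> str:
            # The switch alone decides: stuck "on" empties the lake,
            # stuck "off" lets the dam get over-topped.
            return "OPEN_GATE" if state.level_switch_tripped else "CLOSE_GATE"

        def supervised_controller(state: LakeState,
                                  independent_level_m: float,
                                  operator_ack: bool) -> str:
            # Cross-check the switch against an independent level reading and refuse
            # to act on a disagreement until a human has acknowledged it.
            gauge_says_high = independent_level_m > SPILLWAY_THRESHOLD_M
            if state.level_switch_tripped != gauge_says_high and not operator_ack:
                return "HOLD_AND_ALARM"
            return "OPEN_GATE" if gauge_says_high else "CLOSE_GATE"

        if __name__ == "__main__":
            stuck_on = LakeState(level_switch_tripped=True)  # switch failed in the "high" position
            print(naive_controller(stuck_on))  # OPEN_GATE - happily drains the lake
            print(supervised_controller(stuck_on, independent_level_m=98.2, operator_ack=False))  # HOLD_AND_ALARM
        ```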

        On May 6, 2010, a large collection of automated trading algorithms, acting with agency too fast for anyone to manage, caused a dramatic flash crash of the stock market.

        Lately, we’ve got ELIZA (https://en.wikipedia.org/wiki/ELIZA) gone wild in advanced chatbots. People who allow themselves to be sucked into the fantasy that the chatbot “is real”, like a person they can trust, are giving those chatbots agency in their lives, and with a baseline of 132 suicides per DAY in the US alone, of course there will be some people whose decision to take their own life was influenced, both for and against, by their interactions with chatbots.
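
        For anyone who has not seen how little was behind the original trick, here is a toy ELIZA-style sketch (made-up patterns, nowhere near Weizenbaum’s actual script): pure pattern matching and pronoun reflection, with no understanding at all, yet that was enough for people to start confiding in the program.

        ```python
        # Minimal ELIZA-flavoured toy: regex patterns plus pronoun "reflection".
        # There is no model of the user at all, only string substitution.
        import re

        REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                       "you": "I", "your": "my"}

        def reflect(text: str) -> str:
            # Swap first- and second-person words so the reply points back at the user.
            return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

        def eliza_reply(text: str) -> str:
            m = re.match(r"i feel (.*)", text, re.IGNORECASE)
            if m:
                return f"Why do you feel {reflect(m.group(1))}?"
            m = re.match(r"i am (.*)", text, re.IGNORECASE)
            if m:
                return f"How long have you been {reflect(m.group(1))}?"
            return "Please tell me more."

        if __name__ == "__main__":
            print(eliza_reply("I feel like nobody listens to me"))
            # -> Why do you feel like nobody listens to you?
        ```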

        I give LLMs (limited) agency in the creation of software. I like to think I employ a risk-based approach: more agency and less oversight for simple applications with limited to near-zero risk, and stricter oversight and review for LLM-generated code that serves more important functions or carries a greater risk of harm should it malfunction… Of course, these are judgement calls, and with millions of people using LLMs to generate code, even if they all follow a similar risk-based approach to how much unrestricted agency the LLM is given, there will be those who make bad judgement calls…
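
        Roughly what I mean by a risk-based approach, as a sketch (the risk tiers and gate rules here are my own illustrative judgement calls, not any standard or existing tool):

        ```python
        # Hypothetical gate: how much unreviewed agency LLM-generated code gets
        # depends on a (human) judgement call about what the code touches.
        from enum import Enum

        class Risk(Enum):
            NEAR_ZERO = 1  # throwaway script, toy UI
            MODERATE = 2   # internal tool, recoverable data
            HIGH = 3       # money, safety, or personal data on the line

        def required_oversight(risk: Risk) -> dict:
            """Map a risk tier to how much human review the LLM output must get."""
            if risk is Risk.NEAR_ZERO:
                return {"human_review": False, "tests_required": False, "auto_merge": True}
            if risk is Risk.MODERATE:
                return {"human_review": True, "tests_required": True, "auto_merge": False}
            return {"human_review": True, "tests_required": True, "auto_merge": False,
                    "second_reviewer": True}

        if __name__ == "__main__":
            print(required_oversight(Risk.NEAR_ZERO))  # LLM can run mostly unsupervised
            print(required_oversight(Risk.HIGH))       # every line gets human eyes
        ```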

        Then there are the YOLOs, pushing the boundaries as hard and fast as they can in some sort of quest to be the first to achieve something great. As Ollivander said to Harry Potter: “He who must not be named did great things, terrible to be sure, but also great.”