• yeahiknow3@lemmy.dbzer0.com
    15 hours ago

    isn’t an AI replacing artists evidence it has an experience

    I can only speak about the literary world, and I was quite sanguine about ChatGPT in the early days, before I learned about how LLMs actually work. Having experimented with these tools extensively, I am certain that not a single page of good fiction has ever been produced by these statistical models. Their banality is almost uncanny — unless you know how they work, in which case it makes sense.

    Now to be fair, fewer than 1 in 100 people can write fiction well, and fewer than 1 in 10,000 can do it at a level I’d consider “art” (as opposed to amateur dabbling).

    LLMs are limited by the mathematics of their design. They’re just tracking weighted averages about what word comes next. That’s why they’re so good at corpospeak and technical writing, and so utterly worthless and cringey at writing fiction (or “art”).
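    As a toy sketch of that “weighted averages” point (every number and word here is invented purely for illustration): the model assigns a probability to each candidate next token and samples from that distribution, with a temperature knob trading predictability against noise.

```python
import random

# Hypothetical next-token distribution after some prompt; the words and
# probabilities are made up to illustrate the mechanism, not real model output.
next_token_probs = {
    "synergy": 0.45,
    "team": 0.30,
    "resources": 0.20,
    "moonlight": 0.05,  # unusual continuations get tiny probabilities
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token: low temperature favors the likeliest word,
    high temperature flattens the distribution toward randomness."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

token = sample_next_token(next_token_probs)
```

    Lowering the temperature pushes the output toward the blandest high-probability words; raising it adds variety, but as noise rather than intent.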

    If a collection of cells can be creative, then an extremely large mathematical system embodied in a GPU could also, potentially, be creative.

    Sure. And a hundred monkeys with typewriters could reproduce the works of Shakespeare. Like you said, the issue is how to do it consistently and not in an infinite sea of garbage, which is what would happen if you increase stochasticity in service of originality. It’s a design limitation.

    I have no idea what it’s “like” to be an LLM

    The same thing that it’s “like” to be a fax machine. They’re not significantly different, and you can literally program an LLM inside a fax machine if you wanted to.

    Anyway, leaving you with the thought that you can’t compare “a collection of cells” to digital computers for two reasons.

    1. Cellular activity is the domain of biologists, who do not study creativity or art. We have absolutely no idea how the tiny analog machinery of multicellular organisms gives rise to consciousness.

    2. Comparing digital stuff to analog stuff is a category error.

    • “If a collection of cells can be creative, why not a mathematical system in a GPU?”

    • “If a collection of cells can be creative, why not cheeseburgers?”

    In both cases the answer is potato.

    • CanadaPlus@lemmy.sdf.org
      4 hours ago

      Biological neurons are actually more digital than artificial neural nets are. They fire with equal intensity, or don’t fire (that at least is well understood). Meanwhile, a node in your LLM has an approximately continuous range of activations.
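      The contrast can be sketched in a few lines (a deliberately crude model; the threshold figure is just a textbook value): the spiking output is all-or-nothing, while the artificial node’s output is graded.

```python
import math

def biological_spike(membrane_potential_mv, threshold_mv=-55.0):
    """All-or-nothing: the neuron fires a full spike or stays silent."""
    return 1 if membrane_potential_mv >= threshold_mv else 0

def artificial_activation(weighted_input):
    """Graded: a sigmoid node can output any value in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-weighted_input))

biological_spike(-70.0)     # 0 - below threshold, silent
biological_spike(-50.0)     # 1 - fires at full intensity
artificial_activation(0.3)  # ~0.574 - a continuous in-between value
```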

      They’re just tracking weighted averages about what word comes next.

      That’s leaving out most of the actual complexity. There’s gigabytes or terabytes of mysterious numbers playing off of each other to decide the probabilities of each word in an LLM, and it’s looking at quite a bit of previous context. A human author also has to decide the next word to type repeatedly, so it doesn’t really preclude much.

      If you just go word-by-word or few-words-by-few-words straightforwardly, that’s called a Markov chain, and they rarely get basic grammar right.
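      For comparison, here’s a minimal order-1 Markov chain of the kind described (the corpus is a made-up toy): each word is chosen by looking only at the single previous word, which is why the output loses the grammatical thread so quickly.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Record which words were observed to follow each word."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8):
    """Walk the chain one word at a time, with no wider context."""
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
sentence = generate(build_chain(corpus), "the")
```

      Even on this tiny corpus the walk happily wanders into strings like “the mat and the cat sat on the dog”, because nothing beyond the previous word constrains the next choice.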

      Like you said, the issue is how to do it consistently and not in an infinite sea of garbage, which is what would happen if you increase stochasticity in service of originality. It’s a design limitation.

      Sure, we agree on that. Where we maybe disagree is on whether humans experience the same kind of tradeoff. And then we got a bit into unrelated philosophy of mind.

      and you can literally program an LLM inside a fax machine if you wanted to.

      Absolutely, although it’d have to be more of an SLM to fit. You don’t think the exact hardware used is important though, do you? Our own brains don’t exactly look like much.

      • yeahiknow3@lemmy.dbzer0.com
        2 minutes ago

        Biological neurons are actually more digital than artificial neural nets are.

        There are three types of computers.

        1. Digital
        2. Analog
        3. Quantum

        Digital means reducible to a Turing machine. Analog, which includes things like flowers and cats, means irreducible by definition. (Otherwise, they would be digital.)

        Brains are analog computers (maybe with some quantum components we don’t understand).

        Making a mathematical model of an analog computer is like taking a digital picture of a flower. That picture is not the same as the flower. It won’t work the same way. It will not produce nectar, for instance, or perform photosynthesis.

        Everything about how a neuron works is completely undigitizable. There’s integration at the axon hillock; there are gooey vesicles full of neurotransmitters whose expression is chemically mediated, dumped into a synaptic cleft whose width constantly varies, jostled by Brownian motion, to activate receptors whose binding affinity isn’t even consistent. The best we can do is build mathematical models that sort of predict what happens next on average.

        These crude neural maps are not themselves engaged in brain activity — the map is not the territory.

        Idk where you got the idea that neurons can be digitized, but someone lied to you.

        • CanadaPlus@lemmy.sdf.org
          1 hour ago

          I’m not trying to be cheeky or dismissive, but: https://en.wikipedia.org/wiki/Analog_signal

          It’s not about irreducibility - that’s not a feature any part of physics has. Even quantum states can be fully simulated by a digital computer, just with prohibitive (i.e. exponential in qubits) overhead. It’s about continuous vs. discrete, and a very large number of discrete states can become indistinguishable from a continuum. Sometimes provably.
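          That “exponential in qubits” overhead is easy to make concrete: a classical statevector simulation has to store 2^n complex amplitudes for n qubits, so memory alone blows up (the sizes below assume 16-byte complex numbers).

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """A full statevector holds 2**n complex amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

statevector_bytes(10)  # 16 KiB - trivial
statevector_bytes(30)  # ~17 GB - a beefy workstation
statevector_bytes(60)  # ~18 exabytes - hopeless
```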

          It’s true that the internal functions that determine whether neurons fire are poorly understood. Once we have that data, though, it will absolutely be possible to simulate them. It’s long been done for individual organoids, and at this point the hardware has scaled enough to look at doing an entire bacterium and its nearby environment. If the interactions of a random patch of water molecules can be neglected - and biochemists usually do neglect them - that software could be made much, much lighter yet.

          I’d like to point out that Earth’s weather systems are continuous, bigger, and far more chaotic. If biology were irreducible, meteorology would be as well.

          • yeahiknow3@lemmy.dbzer0.com
            3 minutes ago

            I explicitly explained that you can model an analog machine using a digital computer. When you make a topological map of a weather system (or a brain), or take a digital picture of a flower, you are generating a model. This is the subject of the article you linked.

            No matter how accurate your digital model of a weather system, however, it will never produce rain. The byproduct of Turing machines (digital models) is strictly discrete.

            • Thoughts are a byproduct of brains, just as rain is a byproduct of weather and torque is a byproduct of internal combustion engines.
            • You could generate rain, torque (and maybe thoughts) in various contexts, of course. But not with Turing machines, whose only possible outputs are 1s and 0s.

            You can model digital computers using analog computers. And the reverse is also possible. But digital systems are substrate-independent, whereas analog systems are substrate-dependent. They’re fundamentally inextricable from the stuff of which they’re made.

            On the other hand, digital models aren’t made of stuff. They’re abstract. You can certainly instantiate a digital model within a physical substrate (silicon chips), the way you can print a picture of an engine on a piece of paper, but it won’t produce torque like an actual engine, let alone rain like an actual weather system.

            On a separate note, you reallllly need to acquaint yourself with Complexity Theory, if you actually believe our models will ever be anything other than decent estimates.

            To learn more, please take a Theoretical Computer Science course.

            Irreducibility isn’t a part of physics

            Correct. It’s theoretical computer science. Again, analog systems are irreducible to digital ones by definition. They can only be modeled (functionally and crudely).