The immediate catalyst appears to be an intensifying focus on capex (capital expenditures). Microsoft disclosed after reporting earnings Wednesday afternoon that its spending surged 66% to $37.5 billion in the latest quarter, even as growth in its Azure cloud business cooled slightly. More concerning to analysts, however, was a new disclosure that roughly 45% of the company's $625 billion in remaining performance obligations (RPO), a key measure of contracted future cloud revenue, is tied directly to OpenAI. (Microsoft is both a major investor in and a provider of cloud-computing services to OpenAI.)

    • Earthman_Jim@lemmy.zip · 4 hours ago

      The jig was up for me when I tried to get it to play dungeon master in a game of DnD. It would start out great, but eventually it would forget what we were doing: instead of giving me choices, it started just narrating the story of me playing DnD, and it stopped offering options. This would happen about 6 minutes in, or 3 or 4 "turns", and that's when I realized what a memory sieve it is if it can't reference instructions given moments ago. A newer model won't fix that.

      At the end of the day it’s complex predictive text that amounts to a Rorschach test.

      • Wispy2891@lemmy.world · 3 hours ago

        You'd need a custom program if you want to do that, I mean a traditional program where variables are actually stored.

        The models have no memory at all; every question starts from scratch. The clients just "pretend" there's a memory by including all previous questions and answers in your latest query. You reply "ok", but the model is actually receiving thousands of words of history.

        Because each question gets more and more expensive as the history grows, at some point the client starts to prune old content. It either truncates it (the completely useless Meta AI chatbot that WhatsApp forced down everyone's throat, for example, loses context after 2-3 questions) or it uses the model itself to produce a condensed summary of past interactions, and that summarizing step is where hallucinations creep in.

        Otherwise each question would cost something like $1 or more.
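The mechanism described above can be sketched in a few lines. This is a minimal illustration, not any real client's code: `call_model` is a hypothetical stand-in for an LLM API call, and a character budget stands in for the real token limit. The key points it shows are that the model sees one big prompt per call, the client re-sends the whole history every turn, and old turns get dropped once the context no longer fits.

```python
MAX_CONTEXT_CHARS = 2000  # stand-in for the model's context-window limit


def call_model(prompt: str) -> str:
    # Placeholder: a real client would send `prompt` to an LLM API here.
    return f"(reply to a {len(prompt)}-char prompt)"


class ChatSession:
    def __init__(self) -> None:
        # The "memory" lives entirely on the client side.
        self.history: list[tuple[str, str]] = []  # (user, assistant) turns

    def build_prompt(self, question: str) -> str:
        # Re-send every surviving turn; the model itself keeps no state.
        lines = []
        for user, assistant in self.history:
            lines.append(f"User: {user}")
            lines.append(f"Assistant: {assistant}")
        lines.append(f"User: {question}")
        return "\n".join(lines)

    def ask(self, question: str) -> str:
        prompt = self.build_prompt(question)
        # Prune the oldest turns once the prompt exceeds the budget.
        # This is exactly the point where the bot "forgets" instructions
        # given a few turns earlier.
        while len(prompt) > MAX_CONTEXT_CHARS and self.history:
            self.history.pop(0)
            prompt = self.build_prompt(question)
        answer = call_model(prompt)
        self.history.append((question, answer))
        return answer
```

After enough turns, the session silently discards the earliest exchanges to stay under the budget, which is why a long DnD session drifts off the rails even though each individual reply looks coherent.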

        • Earthman_Jim@lemmy.zip · 3 hours ago

          Which kind of illustrates the fundamental flaw, right? Video game companies have spent decades creating replayable DnD-esque experiences that are far more memory-efficient and cost-effective. They already do it about the best way possible. AI can assist, and things like the machine learning behind the NPC behaviors in Arc Raiders, for example, are very cool, but as you said, you need a custom program... which is what a video game is. So I guess my point is I don't see the appeal in re-inventing it through a sort of automated reverse engineering.

          • postscarce@lemmy.dbzer0.com · 58 minutes ago

            LLMs could theoretically give a game a lot more flexibility by responding dynamically to player actions, generating custom dialogue, and so on, but, as you say, it would work best as a module within an existing framework.

            I bet some of the big game dev companies are already experimenting with this, and in a few years (maybe a decade, given how long it takes to develop a AAA title these days) we'll see RPGs with NPCs you can actually chat with, who stay in character and respond to what you do. Of course, that would probably mean API calls to the publisher's server where the custom models run, with all of the downsides that entails.