The ARC Prize organization designs benchmarks specifically crafted around tasks that humans complete easily but that remain difficult for AIs such as LLMs, “Reasoning” models, and agentic frameworks.

ARC-AGI-3 is the first fully interactive benchmark in the ARC-AGI series. It comprises hundreds of original turn-based environments, each handcrafted by a team of human game designers. There are no instructions, no rules, and no stated goals. To succeed, an AI agent must explore each environment on its own, figure out how it works, discover what winning looks like, and carry what it learns forward across increasingly difficult levels.
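To make the interaction loop concrete, here is a minimal sketch of an agent playing one of these environments. Everything in it (the environment's reset/step interface, the RandomExplorer class) is a hypothetical stand-in, not the real ARC-AGI-3 agent API; it only illustrates the explore-and-adapt cycle described above.

```python
# Minimal sketch of an explore-act loop for an interactive, turn-based
# environment. All names here (env.reset, env.step, RandomExplorer)
# are illustrative stand-ins, not the actual ARC-AGI-3 agent API.
import random

class RandomExplorer:
    """With no instructions, rules, or stated goals, an agent's first
    moves are necessarily exploratory."""
    def __init__(self, actions):
        self.actions = actions
        self.seen = set()  # observations encountered so far

    def act(self, observation):
        self.seen.add(observation)          # remember what we have seen
        return random.choice(self.actions)  # a real agent would learn here

def run_episode(env, agent, max_steps=1000):
    obs = env.reset()
    for _ in range(max_steps):
        obs, solved, done = env.step(agent.act(obs))
        if solved:
            return True   # every level cleared on this run
        if done:
            return False  # episode ended without a win
    return False
```

A competitive agent would replace the random action choice with something that builds a model of the environment from what it has observed so far.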

Previous ARC-AGI benchmarks predicted and tracked major AI breakthroughs, from reasoning models to coding agents. ARC-AGI-3 points to what’s next: the gap between AI that can follow instructions and AI that can genuinely explore, learn, and adapt in unfamiliar situations.

You can try the tasks yourself here: https://arcprize.org/arc-agi/3

Here is the current leaderboard for ARC-AGI-3, using state-of-the-art models:

  • OpenAI GPT-5.4 High - 0.3% success rate at $5.2K
  • Google Gemini 3.1 Pro - 0.2% success rate at $2.2K
  • Anthropic Opus 4.6 Max - 0.2% success rate at $8.9K
  • xAI Grok 4.20 Reasoning - 0.0% success rate at $3.8K

ARC-AGI-3 Leaderboard
(Logarithmic cost on the horizontal axis. Note that the vertical scale runs from 0% to 3% in this graph. If human scores were included, they would sit at 100%, at a cost of approximately $250.)

https://arcprize.org/leaderboard

Technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

In order for an environment to be included in ARC-AGI-3, it needs to pass a minimum “easy for humans” threshold. Each environment was attempted by 10 people, and only environments that at least two human participants could independently solve in full were considered for inclusion in the public, semi-private, and fully-private sets. Many environments were solved by six or more people. As a reminder, an environment counts as solved only if the test taker completed all levels upon seeing the environment for the very first time. As such, all ARC-AGI-3 environments are verified to be 100% solvable by humans with no prior task-specific training.
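Expressed as code, that inclusion rule reduces to a simple filter. The sketch below is only an illustration of the stated criterion; the function name and inputs are hypothetical, not taken from the ARC Prize tooling.

```python
# Sketch of the "easy for humans" inclusion filter described above.
# Each environment was attempted by 10 people; an attempt counts as a
# solve only if the tester completed every level on first exposure.
def include_environment(first_try_full_solves):
    """One bool per human tester: solved all levels on the first try?"""
    assert len(first_try_full_solves) == 10
    return sum(first_try_full_solves) >= 2  # two independent solves required

# Example: 3 of 10 testers fully solved the environment, so it is kept.
print(include_environment([True, False, True, False, False,
                           True, False, False, False, False]))  # True
```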

      • PhoenixDog@lemmy.world

        Someone else in the comments said it perfectly. AI is just data regurgitation. It’s like calling me highly intelligent because I read you a paragraph from Wikipedia. I didn’t know anything. I just read a thing and said it out loud.

        • mechoman444@lemmy.world

          No. You’re not just wrong, you’re aggressively uninformed.

          Repeating the same tired “AI is just regurgitating data” line makes it clear you don’t understand what you’re criticizing. Calling large language models “AI” the way you do just exposes that you do not know what you are talking about. It is like a creationist smugly saying “orangutang” instead of “orangutan” and thinking they sound informed. You are not demonstrating insight. You are advertising ignorance.

          What you’re describing, reading a paragraph off Wikipedia, is literal retrieval. That is not how modern language models operate. They are not databases with a search bar attached. They are probabilistic systems trained to model patterns, structure, and relationships across massive datasets. When they generate a response, they are not pulling a stored paragraph. They are constructing output token by token based on learned representations.
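          (As a concrete toy illustration of “token by token”: the loop below is a generic autoregressive sampler, not any particular model’s code. The toy_model stand-in returns a handwritten distribution; a real LLM computes the next-token distribution from learned transformer weights.)

          ```python
          # Toy autoregressive generation: map the current context to a
          # probability distribution over the next token, sample one,
          # append it, and repeat. toy_model is an illustrative stand-in
          # for the learned network in a real LLM.
          import random

          def sample_next(probs):
              tokens, weights = zip(*probs.items())
              return random.choices(tokens, weights=weights)[0]

          def generate(model, context, n_tokens):
              out = list(context)
              for _ in range(n_tokens):
                  probs = model(tuple(out))  # distribution over next token
                  out.append(sample_next(probs))
              return out

          def toy_model(context):
              # A real model conditions on the entire context; this toy
              # returns a fixed distribution purely for demonstration.
              return {"the": 0.5, "cat": 0.3, "sat": 0.2}

          print(generate(toy_model, ["the"], 5))
          ```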

          If it were just regurgitation, you would constantly see verbatim copies of training data. You do not. What you see instead is synthesis. Concepts are recombined, abstracted, and adapted to context. The system can explain the same idea multiple ways, shift tone, handle novel prompts, and connect ideas that were never explicitly paired in the source material. That is fundamentally different from reading something out loud.

          Your analogy fails because it assumes nothing is being transformed. In reality, transformation is the entire mechanism. Information is compressed into weights and then expanded into new outputs.

          Is it human intelligence? No. Is it perfect? No. But reducing it to “just reading Wikipedia out loud” is not skepticism. It is a basic failure to understand how the technology works.

          If you are going to criticize something, at least learn what it is first.

          • lordbritishbusiness@lemmy.world

            Counterpoint: Why should they learn about it?

            It is a good thing to reduce ignorance, but there is more to learn in the world than there is time to learn or space in the brain. People must specialise.

            You must accept that not everyone will understand everything, and this is okay.

            The inner workings of a Large Language Model are very specialist knowledge; “data regurgitation” is apt from a distance, especially when most publicly available models are primarily used for search.

            Criticism must be accepted, even from those who do not understand, so long as it’s in good faith. It is after all an opportunity to reduce ignorance to someone with the time and interest to learn.

            Don’t rudely lord your intelligence over someone else; it might not end well, and it invalidates the delivery of your entire argument.

            • mechoman444@lemmy.world

              The reason he should learn about it is that he’s talking about it as though he’s informed, and he is not.

              I don’t have to be an LLM programmer working at OpenAI to have a working knowledge of how these machines function. It’s literally just a Google search.

              He made an unreasonable, ignorant comment and I called him out. He should feel ashamed, and I have absolutely no reason to soften what I’m saying under the guise of being nice.

          • PhoenixDog@lemmy.world

            This might be the most comprehensive comment I’ve ever read of someone announcing to the world how utterly stupid they are. It’s incredibly impressive how articulately you described your absolute lack of critical thinking.

            It’s almost like intentionally shooting yourself in the nuts, then openly releasing the video of it while saying you promote gun safety.

            • mechoman444@lemmy.world

              Calling an LLM a Wikipedia regurgitator is factually and objectively incorrect.

              Is there anything that you can say to refute the facts that I presented in my above comment?

              (I rolled my eyes so hard at your comment that I pulled my back out.)

          • hitmyspot@aussie.zone

            You’re discounting the fact that a human reading Wikipedia aloud will add intonation and tone to the text, giving it further context and meaning. I think the analogy is good. It’s not precise, but it gets at the same thing.

            I do think AI has a useful purpose and is here to stay. I don’t think it’s groundbreaking like the AI companies want us to think. The bubble will burst and then we’ll see where the cards lie.

            OpenAI has lost its lead, and I expect it will start to struggle to secure further funding. There are quite a few warning signs. The price of oil is likely to increase power prices generally and cause construction delays and cost rises. Both will hamper its plans. It still doesn’t have a viable model for profit.

            • mechoman444@lemmy.world

              The analogy is terrible and, once again, is not at all what LLMs do.

              This is an objective fact; I have provided evidence to support it.

              How can you say the analogy is good?

              • hitmyspot@aussie.zone

                An analogy does not need to be precise. It expresses a comparison for easier understanding. It is not what LLMs do; however, what you’ve expressed is simplified as well. So by your own standard, it would not be useful for the discussion either.

                So maybe get your head out of your ass and try to understand what people are trying to express instead of correcting them when they are not incorrect.

                If precision were of that much importance to you, you would have a different opinion of LLMs.

                • mechoman444@lemmy.world

                  I fully understand the analogy being presented. It is a poor analogy and fundamentally incorrect because that is not how LLMs function. They do not “read back Wikipedia pages,” which is a complete misunderstanding of the technology, not a minor lack of precision.

                  I am not disputing that it is an analogy, nor am I claiming that exact precision is necessary to analyze it. The point remains: the analogy fails.

                  What is curious is how people focus on my tone, saying I am aggressive or should be more precise, rather than engaging with the substance of my argument. So far, no one has directly refuted my points. This suggests that many responding are simply following the anti-AI bandwagon without understanding the technology, which is both reductive and disappointing.

                  • hitmyspot@aussie.zone

                    No, the analogy is about regurgitating data without understanding it. The reality is more complex than that, but the gist is that LLMs don’t understand or have knowledge of the data being presented.

                    They are statistical models of what desirable output looks like. They don’t understand what they give as an answer. That is why they hallucinate information that sounds plausible and confident.

                    We’re not refuting your point about how the technology works, only your claim that the person you replied to provided a poor analogy. They didn’t. It served the purpose it was designed to serve. If you don’t understand that, that’s on you, not them. Maybe ask an AI to explain. ;)