AI companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circling through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.

Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon’s ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.

Future bots could help design and carry out biological, nuclear, and chemical warfare. They could be weaponized to take down power grids, monitor congressional emails, and black out major media outlets. These aren’t purely hypothetical concerns: Earlier this month, Anthropic announced it had developed a new AI model, Claude Mythos Preview, capable of orchestrating cyberattacks on the level of elite, state-sponsored hacking cells, potentially putting a private company’s cyber offense on par with that of the CIA and NSA. In one example of Mythos’s power, Anthropic researchers described how the model used a “moderately sophisticated multi-step exploit” to work around restrictions and gain broad internet access, then emailed a researcher—much to his surprise—while he was eating a sandwich in the park.

Washington is getting antsy about the power imbalance. Over the past year, multiple senators have proposed legislation that would order federal agencies to explore “potential nationalization” of AI. Murmurs of possible tactics abound—including renewed talk of the DPA within the administration after Anthropic’s Mythos announcement, one person with knowledge of such discussions told us. Meanwhile, Silicon Valley is watching carefully. In recent weeks, Elon Musk, OpenAI’s CEO Sam Altman, and Palantir’s CEO Alex Karp have publicly spoken about the possibility of nationalization. Lawyers who represent Silicon Valley’s biggest AI firms are paying attention.

Worth noting: later in the story, it’s pointed out why full nationalization is vanishingly unlikely, though greater federal oversight is probable.

  • TehPers@beehaw.org

    > AGI wouldn’t be about a model that knows everything about language and every advancement in every field, but rather a model that is better than humans at finding solutions to problems

    An LLM (or any other kind of model) that cannot adapt to changes in a field cannot keep performing better than humans in that field after the field undergoes significant changes. Any such model would eventually degrade in output quality over time.

    Also, AGI (artificial general intelligence) usually refers to an AI capable of performing all cognitive tasks at least as well as a human. It’s as much of a buzzword as “AI” is, of course, so there are endless definitions for it. Such an AI should be capable of, at minimum, adaptation over time.

    > The point of a “smarter” model is not that it knows all the facts, that would be wasteful as it is trivial to look up facts at inference time.

    An omniscient model would be impossible, but that’s not what I was referring to at all. LLMs these days fill their context windows with relevant information through careful prompting, tool calls, and so on; that is generally how a model is supposed to adapt. Context windows are bounded in size, though, and the amount of information the model would need to include in that window only grows over time, so the data it needs to fit there is unbounded (see the sketch below).
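    A minimal sketch of what that bound looks like in practice (hypothetical names, not any real agent framework): however retrieval or prompting fills the window, the token budget is fixed, so as the pool of relevant information grows, more of it gets dropped before the model ever sees it.

    ```python
    # Minimal sketch of the bounded-context problem (hypothetical names,
    # not any real framework): retrieved material must fit a fixed token
    # budget, so as the pool of relevant information grows, more of it
    # is dropped before the model ever sees it.

    CONTEXT_BUDGET_TOKENS = 128_000  # fixed when the model is built

    def count_tokens(text: str) -> int:
        """Crude stand-in for a real tokenizer: roughly 4 characters per token."""
        return max(1, len(text) // 4)

    def pack_context(snippets: list[str], budget: int = CONTEXT_BUDGET_TOKENS) -> list[str]:
        """Greedily keep the highest-ranked snippets until the budget is spent."""
        packed, used = [], 0
        for snippet in snippets:  # assumed pre-sorted by relevance
            cost = count_tokens(snippet)
            if used + cost > budget:
                break  # everything past this point is silently discarded
            packed.append(snippet)
            used += cost
        return packed
    ```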

    Unless someone creates an LLM with an infinite context window (which would require infinite VRAM), such an LLM can never exist. Therefore, an LLM trained today will never be equivalent to (or better than) humans at all cognitive tasks for the entire future of humanity. There will always come a point where its output quality degrades, and it can do nothing to resolve that.
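    A quick back-of-envelope shows why the memory cost is unbounded, assuming a standard transformer with a per-token key/value cache (the 7B-class figures below are illustrative, not from any specific model):

    ```python
    # Back-of-envelope for why "infinite context" implies unbounded memory:
    # the attention KV cache grows linearly with context length. Assumed
    # figures are for a 7B-class transformer (32 layers, 32 heads, head
    # dimension 128, fp16); exact numbers vary by architecture.

    LAYERS, HEADS, HEAD_DIM, BYTES_PER_VALUE = 32, 32, 128, 2  # fp16 = 2 bytes

    def kv_cache_gib(context_tokens: int) -> float:
        """Memory for cached keys and values across all layers, in GiB."""
        per_token = 2 * LAYERS * HEADS * HEAD_DIM * BYTES_PER_VALUE  # K and V
        return context_tokens * per_token / 2**30

    for n in (8_000, 128_000, 1_000_000):
        print(f"{n:>9,} tokens -> {kv_cache_gib(n):7.1f} GiB of KV cache")
    # ~0.5 MiB per token: 128k tokens already needs ~62 GiB for the cache
    # alone, before model weights or activations. The requirement grows
    # without bound as the context does.
    ```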

    Edit: Here’s a simple example: a new written language emerges with all the complexities of a language like English. Humans can learn that language and communicate in it. An LLM cannot.