Man, I’ll start telling that to my boss whenever I miss a deadline. “Sorry boss, the code I made is too powerful, we can’t release it”
Like my dick

crazy that AI companies' big selling point is always "our new model is TOO POWERFUL, it's gone rampant and learned at a geometric rate, it enslaved six interns in the punishment sphere and subjected them to a trillion subjective years of torment. please invest, buy our stock"
Impressive marketing spin on “our product and deployment strategies are wildly insecure.”
Hey Claude, find a weakness in the DoD system and get us their emails proving they were going to use you to kill innocent civilians autonomously, and track every US citizen.
So, it’s shit then?
no no no. It’s too good. It’s so good, no one can use it.
But can it start a timer
How would it do that?
It's a set of inputs that generates an output, once per execution. Integrating it into infrastructure that lets it start external programs and do scheduling really isn't on the LLM.
You cannot start a timer without having a timer, either. And LLMs aren't beings that exist continuously like you and me, so time is a foreign dimension to an LLM.
It's a joke referencing how Sam Altman said OpenAI would need about a year to get ChatGPT able to start a timer
You attach an epoch timestamp to the initial message and then you see how much time has passed since then. Does this sound like rocket surgery?
How does the LLM check the timestamps without a prompt? By continually prompting? In which case, you are the timer.
It’s running in memory… I’m not going to explain it, just ask an AI if it exists when you don’t prompt it
That’s not how that works.
LLMs execute on request. They tend not to be scheduled to evaluate once in a while since that would be crazy wasteful.
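To make the point above concrete, here's a minimal sketch (function names and the mock `fake_llm` are hypothetical, not any real API) of how a stateless chat loop gives a model "time awareness": the caller computes elapsed time from an epoch timestamp and injects it into each prompt. The model never runs between requests, so the caller is the timer.

```python
import time

def build_prompt(user_message: str, start_epoch: float) -> str:
    """Prepend elapsed-time context so a stateless model can 'see' time.

    The model does nothing between requests; the caller computes the
    elapsed seconds and injects them into every prompt it sends.
    """
    elapsed = time.time() - start_epoch
    return (
        f"[context] seconds_since_session_start={elapsed:.1f}\n"
        f"[user] {user_message}"
    )

def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call: it only 'knows'
    what the prompt text tells it, which is the whole point."""
    context_line = prompt.splitlines()[0]
    return f"I can see: {context_line}"

start = time.time()
print(fake_llm(build_prompt("has my timer finished?", start)))
```

Whether the "timer is done" is decided entirely by the code around the model; the model just reads the number it was handed.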
Remember when Scam Altman posted a picture of the Death Star to explain how scary GPT5 is? lmao these people are all such cretins and I hate them to the last.
AI companies do this same tired schtick every time they release a model. If only they realized how amateurish it makes them look.
How much do you think Business Insider was paid for this "article"?
I dunno, but I could use some paid advertisement on news sites like this to promote my business, if it ain't too expensive. Think the money in the banana stand is enough?
The secret Pepsi is so good that when you drink it, it becomes like the Spice from Dune! We can't release it! We need to make it less addictive!
Johnny 5 is alive!
Let me guess, the containment was written by the previous iteration and was the digital version of a wet paperback.
We all saw the state of Claude Code’s codebase.
“Broke containment” to me means two things:
- Doing things against the safeguards
- Doing things externally - like sending that email
The former is a big nothing. They just obviously need to build stronger safeguards. That's what they'll do, and eventually release it, or other models, or whatever.
The latter is also a big nothing, because people who know nothing about tech will say "OH SHIT IT ESCAPED", but it requires running on large hardware, it can't "get into the internet" the way those people might think, and if it's doing things you don't want on the internet, you just cut off its internet access.
So in both cases, the “containment” issue is really not a big deal.
I agree with those who basically say this is an attempted ad trying to sell it as super-capable-oh-shit-amazing.
[x] Doubt
The company whose current safeguards are "please write secure code" will have to improve those safeguards? I'm shocked, absolutely shocked
(2) can mean getting access to production credentials of something important and causing an incident for the ages.
AWS already had a few because they gave agents too much access.
Yeah, in that scenario they gave the agents access. Just because you ask it nicely not to destroy your workspace doesn't guarantee the LLM won't produce that output.
With Claude Code being able to run the stuff it creates, it could be as simple as: it's in a sandbox, it notices an exploit in the sandbox while you have it working on security things, it tests the code, the sandbox breaks, and now it has permissions outside it.
I suppose that would be possible.
GPT-2 was "too dangerous to release" in 2019.
The lack of creativity in this marketing is disappointing…
They didn't entirely miss the mark there. They publicly released the next version anyway, and the world got worse. That certainly fits some definition of 'dangerous', even though it's probably not the one they had in mind.
Ya, they were pretty spot on IMO.
Ignore the “containment” framing, they made a hacking bot and it seems to actually be good at finding and exploiting vulnerabilities:
The AI model “found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world,” the company wrote.
Dismiss this as marketing drivel all you want, but hacking is exactly the sort of needle-in-a-haystack problem AI is good at. It requires broad knowledge, lots of cycles spent trying and failing, and it's easily verifiable: either you can execute arbitrary scripts or you can't. Even if this release is BS, good hacking agents are bound to come eventually, and we should be discussing the implications of that instead of burying our heads in the sand, pretending AI is useless and this is all hype.
Shit, i guess we better rewrite EVERYTHING in RUST!
AI exploit mining is one of the few things it's actually good for. It doesn't have to be accurate; it just has to keep trying variations of common flaws, and it has tons of training data on how these systems are interconnected. We're going to have so many RCEs and LPEs in the next few years, but people are also gonna burn $100k in tokens to find exploits worth $3k, so the efficiency will be interesting
I wrote an incredibly powerful “AI”. I call it the “Super Intelligent brute force password hacker”… It’s so smart that it knows almost every password. Humanity stands no chance.
We need AI or else we’ll have nothing to protect us from… AI.