

Also, a bunch of shit is about ready to burst out because they somehow decided to use wallpaper to hold a bunch of stuff to the wall instead of putting it in a closet. But it looked fine in the moment, so they decided it was good enough.


Based on my experience with LLMs and the developers I personally know, my best guess is that they didn’t have the skills in the first place…
In the corporate world there are a lot of “developers” who already act kind of like codegen. They just throw plausible-sounding bullshit into an editor and hope for the best. Two examples:
I was once asked to help a team speed up something that ran slow even by their low standards. It turned out they had written their own file-copy routine instead of using the standard library one: it sucked the file into memory, growing the array 512 bytes at a time, and then wrote it back out, 512 bytes at a time. I made the thing nearly instant just by replacing it with a call to the standard library’s file-copy function.
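For a sense of scale, the entire replacement was effectively one line. A minimal sketch (the file names are hypothetical stand-ins, not the team’s actual paths):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CopyDemo {
    // The whole "copy a file" routine: one standard-library call.
    // Files.copy streams the bytes internally (or hands off to the OS),
    // so there is no growing a byte array 512 bytes at a time.
    static void copyFile(Path src, Path dst) throws IOException {
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical demo files so the sketch is self-contained.
        Path src = Files.createTempFile("demo-src", ".dat");
        Files.writeString(src, "some file contents");
        Path dst = src.resolveSibling("demo-copy.dat");

        copyFile(src, dst);
        System.out.println(Files.readString(dst)); // prints the copied contents
    }
}
```

The library call also avoids the correctness hazards of the hand-rolled version (partial reads, buffer bookkeeping, files larger than memory).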
While helping with a separate problem, I noticed their solution for transferring a file with an indeterminate version number in the middle of the file name. It was a huge mess, but the most illustrative part was the line in their Java application declaring the string “ls /path/with/file|grep prefix.*.extension”…
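For contrast, the same lookup can be done in plain Java with a glob over the directory, no shell involved. A hedged sketch (the directory name and the `prefix-*.extension` pattern are hypothetical stand-ins for whatever their naming scheme was):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class FindVersionedFile {
    // Finds e.g. "prefix-2.4.1.extension" by glob, instead of
    // spawning "ls | grep" and parsing its output.
    static Path find(Path dir, String glob) throws IOException {
        try (DirectoryStream<Path> matches = Files.newDirectoryStream(dir, glob)) {
            for (Path p : matches) {
                return p; // first match; real code would decide how to break ties
            }
        }
        return null; // no match
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical directory and file so the sketch is self-contained.
        Path dir = Files.createTempDirectory("demo");
        Files.createFile(dir.resolve("prefix-2.4.1.extension"));

        Path hit = find(dir, "prefix-*.extension");
        System.out.println(hit.getFileName()); // prints prefix-2.4.1.extension
    }
}
```

This also sidesteps the bugs lurking in the original: `grep prefix.*.extension` is a regex, not a glob, so the dots match any character and the string would misfire on unintended file names.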
Lots of human slop out there that AI can actually compete with.


I just don’t get it. Even the purportedly best models screw things up so much that I can’t just leave them to the job without reviewing and fixing the mess they made… And I’m also drowning in pull requests that turn out to be broken while proudly carrying “co-authored by Claude”… They manage to pass their test case, but the change is so messed up that it either explicitly causes problems or includes a bunch of unrelated changes at random.
I feel like I’m being gaslit as I keep reading that there are developers who feel they’ve successfully offloaded the task of coding.
The closest I got was a chore with a perfect success criterion: “address all warnings from the build”. Then let it go and iterate. After 50 rounds, each round ending with “ok, should be done now, everything is taken care of, just need to do a final check”, it had burned through most of my monthly quota before succeeding. Then I looked at the proposed change… It had just added directives to the top of every file telling the tools to disable all the warnings… This was the best Opus 4.6 could do…
Now sure, I can have it tear through some short boilerplate, or have it notice a pattern I’m following and tab through it. But I haven’t seen this “vibe” approach working at all…


Yeah, it’s crazy.
The only times we ever have balanced (or close to balanced) budgets is under Democratic leadership, and businesses consistently thrive under them too, with consistently good economic indicators.
Yet so many people still think Republicans are “good on the economy” and “good for fiscal responsibility”.
I suppose he should have said “gay ewes would be hobbled and lesbian sheep would be incapable of coupling up”.
His quote shows that people say there’s no way of knowing one way or another, like you say, but Doctorow changed the language to be a bit more suggestive of certainty that the phenomenon exists.


While CATL has done real work and shipped real-world solutions, I wouldn’t take their marketing material over the broader scientific consensus.
I am totally willing to buy that the market ends up split between NMC and sodium, with LFP left behind despite having some advantages over sodium. Of course, as solid state becomes a thing, that will be more of a factor than Na vs. LFP vs. NMC.


I think I’ll need a citation. From what I can find, LFP chemistry is still more dense than CATL’s sodium cells, which makes sense because, well, the physics are what they are: sodium is about three times as massive as lithium. The best argument I could see on this point is whether there’s a space in the market between sodium and NMC for LFP (if you’re already compromising on density, then what’s one further compromise to get the other qualities you mention for sodium?).


“less performant”
Well, this is a matter of how you define “performant”.
It’s got lower energy density, which is generally considered a critical measure.


Yeah, but the real developers regard it with disdain. I guess he thinks it’s only an image problem and a rebrand will fix it.


I’ll say that “vibe coding”, to me, implies the operator has zero awareness of the actual code, and that something is wrong with that.
They treat the actual program logic the way folks treat assembly code: as some arcane black magic they don’t have to think about. The problem is that the tooling is nowhere near as deterministic as a compiler, and the output is just too bad to be relied upon.
For certain classes of tasks, it may do a serviceable job, maybe at first. But if you have ongoing evolution requirements, it can dig itself into a hole that it can’t really dig out of. It can’t process the code it has already generated well enough to extrapolate a code change matching the change request.
The GenAI coding needs supervision, and ‘vibe coding’ implies opting out of careful supervision.


This is just so fitting.
I keep getting merge requests now from people who, for their whole job to date, had been too scared off by the syntax to try coding.
It’s almost always a shotgun of way too many lines of code changed for a small thing, often with horrible side effects that would be unacceptable.
Someone wanted to tweak the CSS layout of one element, which should have been a one-line change. The pull request had hundreds of CSS changes, basically touching everything. Clearly the model had started changing things, and he kept telling it that it hadn’t done it yet until finally it did, and it never rolled back anything along the way, including many of the rules being repeated 5 times in a row in the same place…
They felt like AI was making them so helpful, because they could submit a code change directly instead of just asking for what they wanted. They would proudly say “AI told me:” and then explain the brilliance of the AI’s finding. One time, the AI’s finding had been addressed over 6 months prior; the AI never thought to update the software, but instead proposed a really crap workaround that would have failed to cover a whole class of similar scenarios while simultaneously imposing crazy side effects on scenarios that weren’t tested.
I can use AI too. Please just send me what you would have sent to the AI; if the AI can do it, I can use the AI. If you think the AI will figure out how you are using something wrong, and you don’t want to bother/wait for a human to help, fine. But once it gets to what it thinks is a software bug, just rewind and start from your problem statement when you come to me…


No, these are just ‘love taps’
my pp gets hard
The way a lot of them are, they probably wish it still did that.
Yep, evolution always ends in crabs.
If you are already there, why bother?


Yes, the non-determinism is crazy.
I have basically one thing I use voice for: “Call <name>”. With Google Assistant, it reliably called that specific person.
Now that my phone has decided to switch to Gemini, it will sometimes make the call, and sometimes it says something like “I have found one contact with that name in your contacts, their phone number is 1-555-555-5555”, sometimes with extra language clearly intended to be stuffed back into context to guide a next step that never comes. I don’t remember exactly, but it was something along the lines of “Contact match added to context to enable dialing the phone now”.
I’m perfectly fine with a different wake word, or with chaining it through Google Assistant; “Hey Google, ask Gemini …” would be fine.
And yes, it might be vaguely useful for doing a maps search in the car, since that is a pain. A vaguely decent answer I can confirm is nice for things like a road-trip stop for food or some small thing.


Phone use can be “banned” while still allowing them to have phones on their person for emergencies.
Just ban them from being out.


In our case, the phones are allowed to be on their person, just not allowed to be brought out of the pocket (or wherever) except in case of emergency, even between classes and at lunch.
Some classes institute a “phone cabinet”, where students are expected to leave their phones during class.
So the phones are always at hand, but not actively messing with their lives.


My kid goes to a school that recently instituted this strategy.
It feels like I’ve seen a marked improvement in their social behaviors.
Between smartphones and the COVID years, this generation has had it rough for social development…


Mine would be: “I have no idea”, an answer the LLMs generally refuse to give by their nature (usually, declining to answer is rooted in something in the context indicating that refusing to answer is the proper text).
If you really pressed them, they’d probably Google each thing and sum the results, so the estimates would be about as consistent as first Google results.
LLMs have a tendency to emit a plausible answer without regard for facts one way or the other. We try to steer things by stuffing the context with facts roughly based on traditional “fact”-based measures, but if the context doesn’t have factual data to steer the output, the output is purely a matter of narrative consistency rather than data consistency. It may sometimes do that even when the context does have fact-based content.
But what about Monster HDMI cables?