

Eye for an eye and the world goes blind


At multiple government offices I have seen them bring out someone who speaks the visitor’s language when that person has little or no English.
It is far easier if you speak English, since practically speaking English is the most prevalent language, but inability to speak English is not a crime (though with this administration…)


Yeah, I was thinking it was because all the tells in one comment, in such a perfect context, were just too on point…
But I’m a bit triggered because I recently spent a bit of time trying to figure out whether someone who replied to me was on something, because their reply was so weird, irrelevant, and vaguely annoying, before I realized it was his LLM-authored out-of-office message trying to be ‘cute’.


Indeed:
ChatGPT determined that this was related to DEI, responding, “Yes. Improving HVAC systems enhances preservation conditions for collections, aligning with the goal of providing greater access to diverse audiences. #DEI.”


As long as you follow through and actually check the original source, instead of assuming the quotes provided are intact. The point is that, in the case above, DOGE was doing no follow-up, and most people who look to AI as a ‘summary’ assistant aren’t wanting to dig deeper.
Hell, even without AI, lawmakers have frequently been caught admitting they didn’t read the laws they signed; they didn’t have time for that. Now, with AI summaries as an excuse…


Except they can screw up at that role.
There’s a lawsuit because DOGE asked ChatGPT to assess projects’ DEI-ness, and, for example, it declared a grant for fixing air conditioning to be a DEI initiative.


The question is whether the comment is slop or slop parody…


More of an “everyone’s shit here” situation


To be fair, the financial market deeply rewards the “tell us what we want to hear” approach.
Even when the chickens come home to roost, the key players will have gotten billions out of the mania in the meantime.
So on one hand you have someone making a fair but pessimistic assessment of current approaches that isn’t attractive to investors, and whose suggested alternative is very unproven. On the other hand you have someone who agrees with whatever the investors want to believe. The latter is, in this situation, an easy payday.


“Our gods are dead. Ancient Klingon warriors slew them a millennia ago. They were more trouble than they were worth.” -Worf


Just talked to someone here on business; he felt it was unfair to have been sent here. His family had wanted to go to American amusement parks for a vacation, and he said absolutely no way with this administration.


The problem with them as a business is that Atlassian isn’t really a software company so much as a marketing operation that pretends to be one.
Agile consultants say “Atlassian”, companies lap that up at the executive level, and the employees roll with it because selecting Atlassian is “thought leadership”. The people picking Atlassian are not the people using Atlassian. Paradoxically, typical Atlassian-rooted workflows are about as far from actually being agile as you can get.


As much as this is overly simplistic, there’s a sort of appeal here…
The good news when you have proper issue management is that you don’t lose any issues. The bad news is you don’t lose any issues.
In my work, the issue tracker has issues that are over five years old. Any time someone dares to purge those, someone comes out of the woodwork to suddenly and passionately care about this thing they had forgotten for years, until the Jira notification triggered them.
Projects with pristine issue discipline tend to suck, because they waste so much energy on things that didn’t matter, whether that’s fixing them or arguing about their value. The better projects tend to say, “Fine, we will hold that issue in the low-priority backlog and get to it if we ever run out of better stuff to do”; the submitter is placated, and everyone knows we will never run out of better stuff to do.


I don’t see how sailing around Africa is a plausible alternative to the Strait of Hormuz…
It basically just means zero access in/out of the Persian Gulf at all. The closest thing to an alternative is overland access to the Red Sea through Saudi Arabia.
Now if we were talking about the Suez canal, that would make sense.


I was about to say that, as bad as Outlook was, I actually used its search to figure out some Jira ticket, because damn, trying to find a Jira ticket based on a few keywords is just a pain in the ass…


This is the fascinating thing about this bubble. Usually people suspect or perceive a bubble and are afraid of when it pops, but no one really wants it to pop; they just don’t like the fragility that comes from knowing it could pop any minute.
So many people actively want the AI bubble to pop. I can’t recall another bubble so odious that everyone was rooting for it to hurry up and fail.


I agree with you, and I consider it similar to the ‘Hollywood effect’: ask any expert to review typical depictions of their expertise in film and TV, and they will mostly groan at inaccuracies that most people won’t catch.
The problem is that if you compare the works that do it ‘right’ to the ones that do it ‘wrong’, there’s no correlation between doing it right and being more popular; the horribly wrong depictions get plenty of ratings regardless.
Now one might reasonably argue, ‘sure, but that’s pure fiction anyway; if it had real consequences, that would actually matter’, except the same thing constantly happens in real-world situations.
A work colleague picked up his car from some mechanic chain after having it ‘fixed’ and took us to lunch. There was this awful squeal as he started the car, and I asked why it was making that noise right after getting fixed. He said, “Oh, the staff told me that cars just sound like that after a repair until the parts break in,” and that bullshit worked to get him to pay and walk out the door. I asked if I could take a quick look under his hood, and there was a flashlight wedged against a belt. He just laughed it off and said, “Hey, free flashlight, thanks for figuring that out,” and a few months later he mentioned going back to the exact same place for something else.
A few days ago I went to a hardware store whose site said they had the item, but under location it said “see associate”. The first associate checked his device and didn’t understand what the deal was, so he said, “Oh, go over there and ask John, he knows all this stuff.” OK, so I walk over to John, who takes one glance and confidently says, “Oh yeah, that stuff is in a cage in the back row, locked up; just go up to the cage and press the button to get someone to get it.” I think, “OK, good, a guy who really knows his stuff, and the other staff recognize him for it.” I roll up to the cage, look in, and realize, “uh oh, this is not the type of stuff I’m looking for; he made a pretty amateur mistake,” but I push the button anyway. I show my phone to the guy who comes up and say that “John” said it would be here but I couldn’t see it, and at the mention of “John” the guy clearly rolled his eyes; it was abundantly clear that John’s “expertise” was a repeated annoyance for him. The actual answer was that they kept that stuff in the back, and the employees are all supposed to see the notation in their devices telling them this, but none of them seem to figure it out, and John just keeps sending people to his department instead.
This has also come up with the use of AI. I offered that my group could crank out a quick tool to address something that could be a problem, and one of the people said, “In this new era, we don’t need you for this quick tool; I just asked Claude and it made me this application.” So I tested it and reported that (a) it didn’t actually work: it produced stuff that looked right, but the actual tool wouldn’t accept it because it didn’t use the right syntax; and (b) even if it had worked, it faked authentication and had a huge vulnerability. He just laughed it off and said, ‘Guess LLMs sometimes aren’t perfect yet’: no consequences for what could have been a disastrous tool, no serious change in stance on using LLMs. And I am pretty sure the audience found the report about it not working to be an annoying buzzkill and was rooting for the LLM to do all the work instead. People who need your expertise are desperate to not need it anymore; they are willing to believe anything that enables that, and to accept a lot of badness just to not be dependent on you.
AI produces what is seen as a plausible narrative, and a plausible narrative can win even when the facts are against it. To be very charitable, a quick, ‘usually’ correct answer is indeed frequently good enough for a lot of purposes, and an LLM’s speed at generating output can’t be beat.


In IT the golden rule is that, regardless of technical merit, you do not want a business relationship with Oracle under any circumstances.
They will use that foot in the door to make your life hell with audits and invoices for crap you never bought.


Which even they saw as a diminishing opportunity, so they bought Sun, gaining Solaris, Java, and a bunch of other miscellaneous crap.
They get non-trivial amounts of money by punishing anyone with a business relationship with them, through audits and superfluous invoices.
Story time: a product at my company used to provide a Java Web Start application from a web GUI. We did not use any Oracle software, including any of their Java editions, so we paid it no mind (though I hated the applet demanding Java, but at least it wasn’t ActiveX).
Anyway, several of our customers said we needed to purge it, because Oracle detected JSPs served by our software, and their audit held that if JSPs were served but no Java runtimes were detected, the company must obviously be “hiding” the JREs, and invoiced the company for paid Java runtimes for every employee. This happened to multiple of our clients.
So that’s what finally drove us to purge Java and embrace modern HTML capabilities. It’s one way Oracle makes money, and also why no one who knows anything willingly ends up in a business relationship with Oracle.


The point is the endgame.
If it is to strike back to stop the aggression, even by taking out the leadership… that is something that can end a conflict.
If you say it must go as far against the citizens of Israel as the IDF has gone against the citizens of Gaza, that degree of back-and-forth vengeance can never be reasonably resolved. Assuming the entire citizenry must be made to pay for the sins of the leadership is a road to ever-escalating violence.