I’ll normally tell my managers etc. when they have an idea that would not work well in the real world. However, many people seem like they have an almost theistic belief in the power of AI (maybe because they’re deeply invested) so I’m holding my tongue at work.
For the time being I’m doing my job properly but if I’m forced to do things a certain way even if it’s clearly worse I will comply and let them waste their money. I’m tired and I won’t police decisions above my pay grade.
Still biding my time and waiting for the bubble to pop and the next buzzword trend to arrive.
sunk cost fallacy let’s gooo
I’m soon going to be forced to use an internal AI tool at my job. I’m not looking forward to it. These LLMs can be helpful in some use cases, but it feels like they’re some kind of buzzword magic element meant to make people think there’s extra value.
It’s like when everything was “artisanal” for a while, and stores were attempting to charge more money because they put the word in the title of an item. It feels like I’m taking crazy pills watching everyone attempt to grift.
“Do not use your brain, ask the holy brainbot and it will tell you what to think”
I’m so over this AI nonsense. I use AI as a barometer at work, in that the more they talk about AI, the more I know they are adding fuck all to the organization and aren’t actually doing any work.
sounds like the kind of policy an AI would make 🤔
One could probably automate the AI tools while doing something else.
AI at Microsoft has not been optional for months. Someone I know has a job at Microsoft and has described it as hell. No human guidance or training; when they ask questions of higher-ups they’re told to “ask Copilot”. Copilot is used for all internal documentation, such as employee records and user tickets, so when they need to find someone’s department or answer end-user questions they have to ask Copilot. They said it’s like having a full-on conversation every time they need a small bit of information that a spreadsheet or database could deliver much quicker.
AI at Microsoft has not been optional for months.
This is org-specific and role-specific, but it’s being pushed onto more and more people (as is evident from this article).
when they need to find someone’s department
This information is both present in most internal communication tools (org chart), and in the internal directory. Hopefully your friend found it.
Everything else sounds horrible, and I hope your friend is doing better now.
(Sauce: “I know a guy”)
A non-zero number of employees scripted random daily prompts to maintain LLM usage stars.
Automation meets ersatz automation
oh, that’s genius.
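For the curious, the dodge described above is trivial to build. This is a hypothetical sketch, not anyone’s actual script; the prompt list and the posting endpoint are invented for illustration.

```python
import random

# Filler prompts that look like "engagement" on a usage dashboard.
# Entirely made up for illustration.
PROMPTS = [
    "Summarize the company mission statement.",
    "Write a haiku about synergy.",
    "List five ways to use a stapler.",
    "Explain what a spreadsheet is.",
]

def pick_daily_prompt(seed=None):
    """Return one random low-effort prompt to feed the internal LLM."""
    rng = random.Random(seed)
    return rng.choice(PROMPTS)

if __name__ == "__main__":
    prompt = pick_daily_prompt()
    print(prompt)
    # In practice this would run on a cron schedule and POST the prompt
    # to the internal tool, e.g. (hypothetical endpoint):
    #   requests.post(INTERNAL_LLM_URL, json={"prompt": prompt})
```

Run it from cron once a day and the usage stars stay green.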
“Your performance has dropped in the last quarter. Are you sure you’re making use of the AI tools?” “It’s because I’m using the AI tools. It’s re-implemented an HTTP request library at least 500 different times in 500 different ways”
and none of them works
“And it always replies with 500 status codes”
Nadella has lost his mind
Nadella is perfectly sane and doing the most logical thing given his set of priorities and incentives. He’s payed insane amounts of money to push initiatives and pursue goals at the behest of the board; the board is attempting to maximize shareholder value, and the value of the stock is largely determined by perceptions of theoretical future growth.
The reality is irrelevant, the narrative and adhering to the party line is what matters.
Insert payed-paid bot here telling you payed is for boats and paid for transactions.
I don’t know if this is relevant, but I deal a lot with MS support in my role at work. There is some built-in AI that you need to go through in order to open a support case. It’s about 85% useless, and it carries a caveat at the end that says “AI provided information may be incorrect”. However, there are cases where it’s useful, and I don’t have to deal with a human who copies and pastes a novel into their email to tell me they’re working on it.
I’ve had cancellations processed after product upgrades, and they went through in less than a day, and I didn’t have to read 5 paragraphs of bullshit canned script from a human. So it’s not all bad, I guess.
That’s the extent to which I use AI for work. I can’t imagine another use case where I wouldn’t have to double-check everything, and if I have to do that, I may as well just do it myself.
The next AI winter is going to be a doozy, that’s for sure.
Can the tech industry ever do a thing in moderation?
They have to keep pushing on this because they’re all invested up to their necks. The allure of AI is that it offers to replace all human labor for a fraction of the cost, but AI only knows what it scrapes, and the models are starting to poison each other because the net is increasingly flooded with AI bullshit. When it fails, I think the entire tech sector’s going to implode.
It’s not even really cheaper, especially for Microsoft, which is actually footing the bill to run all the data centers.
But the potential benefit lies in the fact that it’s a labor substitute that can’t unionize, can be rapidly switched between different skill sets, won’t quit, won’t ask for raises, and won’t protest when you ask it to participate in DOD contracts. The labor that goes into making it work is constant, uniform, alienated from the actual outputs of the system, and easily replaced if it starts causing problems.
Want more capacity at the company? Build another data center. Need to pivot company priorities to the latest fad? Just reallocate tokens from one department to another; no need to fire a bunch of people and wade through that legal mess, then wade through the mire of hiring a bunch of new people from a limited talent pool. Not using all the data center capacity? Rent out the remainder to other companies.
It reduces the complex and intricate system of a company to a simple resource allocation that can be wielded at will by company leadership.
Why don’t we call the models poisoning each other incest?
So that we can discuss the issue without triggering abuse survivors would be a reason.
Here’s what’s gonna happen: MS is gonna fuck around with the AI to force their desired stack ranking results.