I don’t have a problem with LLMs as much as the way people use them. My boss has offloaded all of his thinking to LLMs to the point he can’t fix a sentence in a slide deck without using an LLM.
It’s the people that try to use LLMs for things outside their domain of expertise that really cause the problems.
This is a big point. People need to understand that LLMs are more like a fancy graphing calculator: they're very good and handle many things, but it's on you to understand why the calculation is meaningful. At a certain point no one wants to see your long division or your factorials worked out by hand. We want the results, and for students and professionals to focus on the concept.
I get the metaphor but it’s not a great one for AI in mathematics especially. A statistical word generator is not going to perform reliable math and woe to anyone who acts otherwise.
I would call it an autistic sycophantic savant with brain damage. It's able to perform apparently miraculous feats of memory and creativity, yet it's unable to tell reality from fiction or to tell whether even the simplest response is valid, and it will likely lie about it to make itself seem more competent to please you.
If you have a use for an assistant like that, then great. But a calculator - simple and cheap and reliable - it definitely is not.
It’s the people that try to use LLMs for things outside their domain of expertise that really cause the problems.
That seems too general. I'm a mobile developer, and sometimes I need a simple script outside my knowledge area. I needed to scrape a website recently, not for anything serious, but to save me time. Claude wrote it and it works. It's probably trash code, but it works and it helped. But you wouldn't want me using Claude to do important work outside my specific area of focus either, or I'm sure I'd cause problems.
I’m also a mobile app dev and at my workplace they’re having non-mobile devs submit code to my codebases totally vibed with no understanding behind it. It’s absolutely causing problems, especially for me, who is one of the only lines of defense keeping stuff even remotely maintainable.
So yes basically you’re right. If people only used it to learn and do initial code review passes and other reasonable things we’d be totally fine. But that’s unfortunately not the reality 🙈
It’s absolutely causing problems, especially for me, who is one of the only lines of defense keeping stuff even remotely maintainable.
The next step is the CEO going: look at how good these non-mobile devs are, they're submitting 10x the commits to the mobile repo compared to boraginoru, our mobile dev! We should fire him and just let the backend devs keep vibe coding it!
I'm talking about people who are accountants that now think they can create software. Or engineers who think they can now write legal briefs for court.
Partly a marketing issue.
Companies keep advertising their new AIs as destroyers of worlds, as something too dangerous to even release.
As with anything else, the average user will have only the most surface-level understanding of the tool.
AI is here, another tool to use…the correct way. Very reasonable approach from Torvalds.
Very frustrating for sure. Like any tool, it’s up to humans to know when the tool is useful.
Clickbait got me. No mention of “Yes copilot” which I assumed was a joke anyway.
👆🏻true