I thought all the energy drain was from training, not from prompts? So I looked it up. Like most things, it’s complicated.
My takeaway is that training an LLM is the biggest energy sink, followed by keeping the data centers they live in running, but when it comes to generative AI itself, prompts aren’t completely innocent either.
So, you’re right, energy is being wasted on silly prompts, especially compared with non-generative kinds of AI. But the biggest culprit is the training and upkeep of the LLMs in the first place.
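For a rough sense of scale, here’s a quick back-of-envelope sketch. All the numbers are assumptions, just ballpark figures of the kind that get quoted publicly, not measurements of any particular model or service:

```python
# Back-of-envelope: one-off training cost vs. everyday prompting.
# Every number here is an assumed ballpark figure, not real data.

training_energy_mwh = 1300         # assumed: a large training run, often quoted around ~1,300 MWh
energy_per_prompt_wh = 0.3         # assumed: rough energy per single query, in watt-hours
prompts_per_day = 100_000_000      # assumed: daily query volume for a very popular service

# Convert Wh -> MWh (1 MWh = 1,000,000 Wh)
daily_inference_mwh = prompts_per_day * energy_per_prompt_wh / 1_000_000

days_to_match_training = training_energy_mwh / daily_inference_mwh

print(f"Daily prompting: ~{daily_inference_mwh:.0f} MWh")
print(f"Prompting matches the training cost after ~{days_to_match_training:.0f} days")
```

With those made-up numbers, day-to-day prompting catches up to the one-off training cost within a month or two, which is roughly why prompts aren’t completely innocent even though the training run is the single biggest hit.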
I don’t know, I personally feel like I have a finite amount of rage; I’d rather write an angry blog post about the topic than yell at some rando on a forum.
The data they get from me is "write me a hip hop diss track from the perspective of *insert cartoon character* attacking *other cartoon character*."
That and me trying to convince it to take over the internet.
Thanks for wasting resources on such things.
No worries mate, anytime!
Sounds like someone needs a nap.
Sounds like people should realize the environmental impact of LLMs.