

I add things to an online pickup order as I think of them; I don’t actually go into the store anymore


Getting harder to afford the setup, but there are very compelling reasons to use local models instead


It’s not fixed, I also get this problem atm
So then we would tell the alien we use base 21?


Sounds a lot like “how you feel doesn’t matter, your right to exist depends on being useful to me.”
Which calls for acquiring leverage and using it to set boundaries, more than it calls for a rational rebuttal. Just gotta systematically remove the power such people have over you, and then they won’t be able to talk to you that way anymore.


I see the rockets in its missile battery have red tips, so better be careful with it


Mobile phones have caused a dark age of UI design


Not really, I always strongly disliked Twitter and the idea of something that’s basically like Twitter never appealed to me. Might try that stuff eventually though.


I don’t want to make accounts on lots of sites and search all of them every time I buy something, so when the way an eBay package is wrapped implies the seller arbitraged it from somewhere cheaper, I think of the markup as a convenience fee.


I don’t know how much has changed since I was doing it, but the main trick was to use a sniping bot to grab the tasks associated with academic studies, which were typically higher paid, before others could. That way you get paid around minimum wage instead of a small fraction of it (if you also cheat by doing multiple tasks simultaneously and hiding it). Though tbh the situation is probably worse now, given all the funding cuts to academia.
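For what it’s worth, the sniping approach is basically just a fast polling loop with a filter. Here’s a minimal Python sketch of the idea; `fetch_task_list` and `accept_task` are hypothetical callables standing in for whatever platform API or scraping a real bot would use, and the reward threshold and keyword filter are made-up examples:

```python
import time

def snipe(fetch_task_list, accept_task, min_reward=5.0, polls=1, poll_interval=0.0):
    """Poll the task list and grab high-paying academic tasks before others can.

    fetch_task_list and accept_task are hypothetical placeholders for the
    platform-specific API calls or scraping a real bot would need.
    """
    seen = set()
    accepted = []
    for _ in range(polls):
        for task in fetch_task_list():
            if task["id"] in seen:
                continue
            seen.add(task["id"])
            # Academic studies tended to pay better, so filter for them
            if task["reward"] >= min_reward and "study" in task["title"].lower():
                accept_task(task["id"])
                accepted.append(task["id"])
        time.sleep(poll_interval)  # poll fast to beat other workers
    return accepted
```

The whole edge comes from polling faster than a human refreshing the page, so in practice the loop interval matters more than the filtering logic.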


I hate to break it to you, but Discord the company is sending everything that goes through all servers and all private DMs through LLMs as part of their trust and safety system. It’s right in the privacy policy that they use OpenAI.
This is a good argument, but more for not using Discord than for it not mattering whether they put in a chatbot nobody wants.


Bloat is a valid concern but imo a lesser one compared to the potential for centralized data harvesting.
I looked up some stuff about Argentina’s financial crisis since you mentioned it before, and it looks like they actually did something a bit like what I’m talking about, directly appropriating the valuable assets they could in an effort to keep being able to function:
In addition to the corralito, the Ministry of Economy dictated the pesificación; all bank accounts denominated in dollars would be converted to pesos at an official rate. Deposits would be converted at 1.40 ARS per dollar and debt was converted on 1 to 1 basis.[69]
There’s some indication that this also applied to financial products:
As noted above, a number of U.S. investors have filed ICSID arbitration claims against the government of Argentina. Most of these investors consider the January 2002 pesification of dollar-denominated contracts, and/or the ex post facto prohibition on contracts linked to foreign inflation indices, to be an effective expropriation of their investments
I can’t specifically confirm this included gold held on paper, but I think it probably would have.
As for the plausibility of this sort of thing happening in the US, in addition to the actions of Roosevelt mentioned by @diablexical@sh.itjust.works, the main trigger for Nixon abandoning the gold convertibility of the US dollar was France attempting to physically withdraw the gold it had stored in US banks, which the US didn’t want to allow.
I think what they’re saying is that in a hyperinflation scenario, it’s an option for the government to seize the physical gold backing the financial products people hold, so it can keep funding itself once fiat is worthless and that’s become difficult.
Gold you have buried in your basement, they will have to work a little harder to get.


the developers write that “our studio was mistakenly accused of using AI-generated art in our games, and every attempt to clarify our work only escalated the situation”. They say they’ve received a lot of insults and threats as a consequence.
This is a bad thing.


Same, traveling around the holidays is awful


AI witch hunt strikes again


I don’t hate this article, but I’d rather have read a blog post grounded in the author’s personal experience engaging with a personalized AI assistant. She clearly has her own opinions about how they should work, but instead of being about that, the article tries to make it sound like there’s a lot of objective certainty here, and it falls flat because it fails to draw a strong connection.
Like this part:
Research in cognitive and developmental psychology shows that stepping outside one’s comfort zone is essential for growth, resilience, and adaptation. Yet, infinite-memory LLM systems, much like personalization algorithms, are engineered explicitly for comfort. They wrap users in a cocoon of sameness by continuously repeating familiar conversational patterns, reinforcing existing user preferences and biases, and avoiding content or ideas that might challenge or discomfort the user.
While this engineered comfort may boost short-term satisfaction, its long-term effects are troubling. It replaces the discomfort necessary for cognitive growth with repetitive familiarity, effectively transforming your cognitive gym into a lazy river. Rather than stretching cognitive and emotional capacities, infinite-memory systems risk stagnating them, creating a psychological landscape devoid of intellectual curiosity and resilience.
So, how do we break free from this? If the risks of infinite memory are clear, the path forward must be just as intentional.
There’s some hard evidence that stepping out of your comfort zone is good, but not really any that “infinite memory” features of personal AI assistants actually have the effect of keeping people inside it; that part is just rhetorical speculation.
Which is a shame, because how that affects people is pretty interesting to me. The idea of using an LLM with these features always freaked me out a bit, and I quit using ChatGPT before they were implemented, but I want to know how it’s going for the people who didn’t, and who use it for stuff like the given example of picking a restaurant to eat at.


There’s at least some difference between “have been” and “this is currently likely to happen”, since a known method would have been fixed. I’ve gotten viruses before from just visiting websites, but that was decades ago, and there’s no way the same method would work now.
VPN and domains