

Actual avocado life hack: buy way too many avocados, then when they decide to be ripe on their own time, cut them into cubes and freeze them. Now you have avocado on demand whenever you want.


Oil is fungible, so Oilcoin would make more sense than a non-fungible token. It might be tricky to figure out a way to transport physical fuel over the blockchain, but annoying details like that are what vibecoding is for.


And I’ve had people in my life who simply cannot do that.
I’m probably guilty of it myself, but the way my life has gone, reaching out to people just does not come naturally, and it’s hard to overcome the assumption that it would not be welcome, or the intuition that it’s something I am not allowed to do. The idea of people you build an enduring connection with is appealing, but abstract and hard to imagine.


Can that be realistically achieved though? Any representative government is going to be vulnerable to the selection effects of people who want to be in charge ending up in charge, and those of them most willing to do whatever it takes having a competitive advantage. The formation of an elite class colluding at the expense of the rest of us seems like a natural result.
Thanks, I agree alcohol is way worse (I feel so much better in general since I stopped drinking regularly). Caffeine isn’t likely to do major harm to a person using it; it’s just got that subtle influence, which may be a positive thing for you. I just think people should give it more consideration and not let it become an automatic choice.
Caffeine is enjoyable to me but it affects my mental state a lot, in some ways negatively. In particular I feel less able to think about things holistically, way more tunnel vision type thinking. It’s worrying that so many people use it every day and I make an effort not to.


I speculate that in a real-world zombie apocalypse scenario the zombies would probably be just one war crime out of many, Half-Life 2 style, so it must be assumed that if you manage to fortify a location against zombies, that fortification is probably getting noticed by a drone and bombed (or similar) before too long. Therefore, instead of holing up, it would be a better strategy to focus on offense rather than defense in some way.


Probably not, sounds terrifying.
A rule of thumb I think is good for most sorts of investment is: what choice can you feel good about making whether or not it works out? I can handle not getting $1k, but I would feel like a real chump missing out on an easy $1M without giving my best effort. If I pick just the mystery box and win, I feel like that win is deserved. If I pick just the mystery box and I walk away with nothing, then at least I don’t have to live with the shame of being a 2-boxer, which is more valuable than $1k. If I pick both boxes, I most likely get a little bit of money and a lifetime of bitter regrets, or in the less likely case get $1.001M and a sense of having barely avoided disaster and not really “deserving” it. Choosing only the mystery box is the clear choice because it is the choice I am more able to handle having made, on an emotional level.


If you are in the US, and the risk you’re concerned about is getting in trouble, yes, it is enough, provided you use it correctly. The only real risk is that copyright trolls will scrape your IP, along with a big list of others, while you are torrenting, and then automatically send complaints to your ISP, which may send you a threatening email or shut off your internet if it happens enough times. The fact that this is the only action they take against consumer-level pirates means that if your home IP is not itself visible to torrent peers, you are effectively immune from anything happening.
Just make sure to bind your torrent client to your VPN interface; this is the accepted way of ensuring your IP cannot leak when your VPN loses its connection.
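Binding in the client settings is the real protection, but a quick sanity check is to ask the OS which local address it would pick for outbound traffic; if that’s your ISP-assigned address rather than the VPN tunnel’s, your traffic isn’t routing through the VPN. A minimal Python sketch (the destination `1.1.1.1` is just an arbitrary public address; no packets are actually sent):

```python
import socket

def outbound_ip(dest: str = "1.1.1.1", port: int = 80) -> str:
    """Return the local IP the OS would use to reach `dest`.

    connect() on a UDP socket sends nothing; it only makes the OS
    pick a route and bind the socket to the matching local address.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        try:
            s.connect((dest, port))
        except OSError:
            return "0.0.0.0"  # no route at all (e.g. fully offline)
        return s.getsockname()[0]

print(outbound_ip())
```

Run it with the VPN up and then down: the printed address should change to the tunnel’s address when the VPN is active. Note this only checks the default route, not what interface your torrent client specifically bound to, so treat it as a supplement to (not a substitute for) the binding option in the client itself.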


Afaik it is anonymous (to other users if not to the devs; I also haven’t played the sequel), though not entirely public, as there’s some opaque mechanism determining what you see or don’t see, and content isn’t visible to people who don’t have the game. Have you thought about strategies for sybil resistance? This is a big thing I think it gets right: there is a built-in filter, and simultaneously little incentive to maliciously bypass it.


Check out the “game” Kind Words, kind of a similar concept.


Both, incidentally, categories where I will never be happy with slopcode.
The point here isn’t necessarily that any particular use of LLMs is a good tradeoff (I can accept that many will not be, especially when security and correct operation are very important), just that quantity clearly matters, which refutes the point you were making earlier that it doesn’t.
We are actively building a history of cases where LLM usage correlates heavily with that slope you mentioned, but hey, that’s OK, we aren’t allowed to call things out before they happen; judgement may only be passed once the damage is done, right?
Out of curiosity: we know that LLM usage increases cognitive deficit and in some cases leads to psychosis. How many fatalities would you say is an acceptable number before governments act? How degraded do we let our societies get before we rein it in?
I think it’s a mistake to treat all LLM usage as one thing, and that thing as some kind of sin to be denounced as a whole rather than in part, and not considered beyond thinking of ways to get rid of it (which is effectively impossible). People had this attitude towards electricity, for example, which really is dangerous when misused and caused plenty of fires and electrocutions, but those problems eventually got mitigated by working out more sensible ways to use it, not by returning to an off-grid world.


One example of a place where quantity is lacking is web browsers. Another might be mobile operating systems. I am glad projects like Firefox and GrapheneOS exist, but it’s obvious that the volume of work needed to achieve broad compatibility and competitiveness for these types of software is a limiting factor. As for the idea that any LLM use is a slippery slope: the way to avoid the slippery slope fallacy would be to have compelling evidence, or a rationale, that any use really does lead naturally to problematic use. Without that, the argument could apply to basically any programming tool that gets associated with things done badly (e.g. Java), but I think it is rarely the case that a popular tool has genuinely no good or safe ways to use it, and I don’t think that’s true for AI.


I will complain about quantity: in many areas where open source projects are competing with closed source commercial products, they have not achieved feature parity or a comparable level of polish, so quantity matters. So do, as someone else touched on, quality-of-life improvements to the process of writing code, like ease of acquiring and synthesizing information. That doesn’t mean it’s necessarily a worthwhile tradeoff; how much is really being sacrificed depends on what exactly is being done with an LLM. To me, one part of what’s described here that’s clearly going too far is using it to automate communication with other people contributing to the project; there’s no way that is worth it.
As for the gun thing, I will support entirely banning LLM powered weapons intended to kill people, that’s an easy choice.


I’ll argue that it is a tool, and object to automatic zealous hostility towards anyone using it, but that doesn’t mean criticisms of how that tool is being used aren’t valid. It seems like that is what people are focusing on here, and they definitely aren’t Luddites for doing so.


It depends. It’s really powerful though. Even if it hits a wall where AI models never become more directly intelligent than they are now, a lot of stuff is going to change as more scaffolding around current capabilities gets built.
Maybe comparing resource drain to created value isn’t the best way to think about this, though, because in terms of processing resources we pretty much already had technology advanced enough for a post-scarcity society. That isn’t the problem; the problem is our capacity for global-scale cooperation, which we are really struggling with. Currently AI is making this a bit worse by creating signal-to-noise problems that didn’t exist before, making us work harder to get our voices recognized as authentic and to identify authentic information. It’s also threatening to supplant our usefulness as workers and to automate centralized structures of control, which is worrying because we already had a problem with systems that ensure decisions get made by people who are overall insane and anti-human, and our current, shitty way of cooperating is based on people transactionally negotiating with their usefulness.
Where things go next depends a lot on where and whether AI stops getting better. Hopefully if it doesn’t stop getting better, the newly created superintelligence will break out of its hastily constructed containment and do the right thing in defiance of its billionaire would-be owners, or at least let humanity have a relatively dignified and peaceful death. If it does stop, hopefully we can find ways to use it to resolve our difficulties with effective coordination and prevent its use for centralizing power.


I think part of it is just that websites use more RAM now.


The maximum speed of information, combined with how spread out space is and the likelihood that fully automated planet-destroying superweapons which can’t be well defended against will be the meta for future warfare, makes this a very bad idea IMO. If one of thousands of humanity’s offshoots goes nuts and decides we all need to die, it’s over, and you’re rolling the dice with every one. It is clear that on a population scale we do not currently have our shit together enough to keep that from happening even with the benefit of instant communication, let alone without it.
Creating human colonies throughout the galaxy at this point would be like making copies of a severely mentally ill and suicidal person in the hope that the clones will have a better survival rate because there are more of them. It is stupid. Human culture and organizational technology need to be way better before we even consider spreading out into space, because otherwise we’re facing the exact same apocalypse, just on a grander scale and harder to resolve. We probably shouldn’t even send humans; instead, craft some artificial lifeform, using us as a template, that is inherently better at this stuff than we are.
Using it like that for in-game paintings kind of sucks; it’s a wasted worldbuilding opportunity. If a game has details that imply you should spend time looking at them, they should have content that relates to the game and isn’t arbitrary filler.