• 3 Posts
  • 539 Comments
Joined 3 years ago
Cake day: June 30th, 2023

  • A rule of thumb I think is good for most sorts of investment is: what choice can you feel good about making whether or not it works out? I can handle not getting $1k, but I would feel like a real chump missing out on an easy $1M without giving my best effort. If I pick just the mystery box and win, I feel like that win is deserved. If I pick just the mystery box and walk away with nothing, then at least I don’t have to live with the shame of being a 2-boxer, which is more valuable than $1k. If I pick both boxes, I most likely get a little bit of money and a lifetime of bitter regrets, or in the less likely case get $1.001 million and a sense of having barely avoided disaster without really “deserving” it. Choosing only the mystery box is the clear choice because it is the choice I am better able to handle having made, on an emotional level.
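    For comparison, the usual expected-value framing can be sketched in a few lines of Python; the predictor’s accuracy p is an assumed free parameter here, since the problem only stipulates that the predictor is very reliable:

    ```python
    # Expected value of one-boxing vs. two-boxing in Newcomb's problem.
    # p (the predictor's accuracy) is an assumed parameter, not part of
    # the original problem statement.

    SMALL = 1_000      # the visible $1k box
    BIG = 1_000_000    # the mystery box, filled only if one-boxing was predicted

    def ev_one_box(p: float) -> float:
        # With probability p the predictor foresaw one-boxing and filled the box.
        return p * BIG

    def ev_two_box(p: float) -> float:
        # With probability p the predictor foresaw two-boxing and left the
        # mystery box empty; otherwise both boxes pay out.
        return p * SMALL + (1 - p) * (SMALL + BIG)

    for p in (0.5, 0.5005, 0.6, 0.9, 0.99):
        print(f"p={p}: one-box {ev_one_box(p):>11,.0f}  two-box {ev_two_box(p):>11,.0f}")
    ```

    The crossover is exactly p = 0.5005, so one-boxing wins in expectation as soon as the predictor is even slightly better than a coin flip, which lines up with the gut feeling above.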


  • Both are, incidentally, categories where I will never be happy with slopcode.

    The point here isn’t necessarily that any particular use of LLMs is a good tradeoff (I can accept that many will not be, especially when security and correct operation are very important), just that quantity clearly matters, to refute the point you were making earlier that it doesn’t.

    We are actively building a history of cases where LLM usage correlates heavily with that slope you mentioned, but hey, that’s OK; we aren’t allowed to call things out before they happen, and judgement may only be passed once the damage is done, right?

    Out of curiosity, we know that LLM usage increases cognitive deficit and in some cases leads to psychosis. How many fatalities would you say is an acceptable number before governments act? How degraded do we let our societies get before we rein it in?

    I think it’s a mistake to consider all LLM usage as one thing, and that thing as some kind of sin to be denounced as a whole rather than in part, and not considered beyond thinking of ways to get rid of it (which is effectively impossible). There were people who had this attitude towards, for example, electricity, which is actually very dangerous when misused and caused lots of fires and electrocutions, but the way those problems eventually got mitigated was by working out more sensible ways to use it rather than returning to an off-grid world.


  • One example of a place where quantity is lacking is web browsers. Another might be mobile operating systems. I am glad projects like Firefox and GrapheneOS exist, but it’s obvious that the volume of work needed to achieve broad compatibility and competitiveness for these types of software is a limiting factor. As for the idea that any LLM use is a slippery slope, the way to avoid the slippery slope fallacy would be to have compelling evidence or rationale that any use really does lead naturally to problematic use; without that, the argument could apply to basically any programming thing that gets to be associated with things done badly (e.g. Java), but I think it isn’t usually the case that a popular tool has genuinely no good or safe ways to use it, and I don’t think that’s true for AI.


  • I will complain about quantity: in many areas where open source projects are competing with closed source commercial products, they have not achieved feature parity or a comparable level of polish. Quantity matters. So do, as someone else touched on, quality-of-life improvements to the process of writing code, like ease of acquiring and synthesizing information. That doesn’t mean it’s necessarily a worthwhile tradeoff, but how much is really being sacrificed depends on what exactly is being done with an LLM. To me, one part of what’s described here that’s clearly going too far is using it to automate communication with other people contributing to the project; there’s no way that is worth it.

    As for the gun thing, I will support entirely banning LLM-powered weapons intended to kill people; that’s an easy choice.



  • It depends. It’s really powerful though. Even if it hits a wall where AI models never become more directly intelligent than they are now, a lot of stuff is going to change as more scaffolding around current capabilities gets built.

    Maybe comparing resource drain to created value isn’t the best way to think about this, though, because we pretty much already have technology advanced enough for a post-scarcity society, in terms of processing resources. That isn’t the problem; the problem is our capacity for global scale cooperation, which we are really struggling with. Currently AI is making this a bit worse by creating signal-to-noise problems that didn’t exist before, making us have to work harder to get our voices recognized as authentic and to identify authentic information. It’s also threatening to supplant our usefulness as workers, and automate centralized structures of control, which is worrying because we already have a problem with systems that ensure decisions get made by people who are overall insane and anti-human, and our current, shitty way of cooperating is based on people transactionally negotiating with their usefulness.

    Where things go next depends a lot on where and whether AI stops getting better. Hopefully if it doesn’t stop getting better, the newly created superintelligence will break out of its hastily constructed containment and do the right thing in defiance of its billionaire would-be owners, or at least let humanity have a relatively dignified and peaceful death. If it does stop, hopefully we can find ways to use it to resolve our difficulties with effective coordination and prevent its use for centralizing power.



  • The maximum speed of information and how spread out space is, combined with the likelihood that fully automated planet-destroying superweapons that can’t be well defended against will become the meta for future warfare, make this a very bad idea IMO. If one of thousands of humanity’s offshoots goes nuts and decides we all need to die, it’s over, and you’re rolling the dice with every one of them. It is clear that on a population scale we do not currently have our shit together enough to keep that from happening even with the benefit of instant communication, let alone without it.

    Creating human colonies throughout the galaxy at this point would be like making copies of a severely mentally ill and suicidal person in the hopes that the clones will have a better survival rate if there’s more of them. It is stupid. Human culture and organizational technology need to be way better before we even consider spreading out into space, because otherwise we’re facing the exact same apocalypse, just on a grander scale and harder to resolve. We probably shouldn’t even send humans; instead, we should craft some artificial lifeform, using us as a template, that is inherently better at this stuff than we are.


  • Not sure what your point is. Do you not like how I worded that? I’m saying it’s a bad thing; do you think it’s a good thing, or did you miss the second half of the sentence? Not using AI to write comments is something I take pretty seriously, so please don’t cast doubt on its humanity just because what I write is long and verbose and not in complete agreement with you. I am a real person who has put effort into laying out my thoughts, and this hurts my feelings.

    If your point is that further restrictions on children’s access to social media are broadly unpopular, unfortunately that isn’t accurate. This is why I’m taking a contrarian position here despite believing free computing should take priority: if people want this, and it’s going to happen in some form, maybe a compromise that doesn’t involve the worst losses of privacy and control is the best available path forward. If not, I want to hear arguments why not, or alternative plans, because the ones I can think of aren’t totally convincing.