• 19 Posts
  • 186 Comments
Joined 2 years ago
Cake day: July 5th, 2023



  • It’s not so much social media that ruined it as capitalism and centralization.

    Forums themselves are a form of social media, and they’re (mostly) great. For Reddit and Lemmy, the best part is arguably the social elements, like the comment sections. The problem isn’t the interaction or the “social” nature of it. It’s that these platforms have turned into pseudo-monopolies intent on controlling people and/or wringing every penny out of them.

    That’s not to say toxicity and capitalistic exploitation didn’t exist before, either. The term “flame war” is older than a lot of adults today. Unlike today, though, platforms were both more decentralized, meaning they were easier to manage and users could switch platforms, and less algorithmic, meaning users could more easily avoid large, bad-faith actors. You’ll notice the Fediverse has both these qualities, which is part of why it’s done so well.

    IMO, the best fix would be twofold. A) Break up the big monopolies, and possibly the pseudo-monopolies too. Monopolies bad, simple enough. B) Much more difficult, but I believe that what content a site promotes, including algorithmically, should be regulated. That’s not to say sorting algorithms should be banned, but I think we need to regulate how they’re used and implemented. For example, regulations could include requiring that alternative algorithms be offered to users, banning “black box” algorithms, requiring that the algorithms be published publicly, and/or banning algorithms that change based on an individual’s engagement. Ideally, this would give users more agency over their experience and reduce the odds of ignorant users being pushed into cult-like rabbit holes.




  • I went down this rabbit hole about a year ago and didn’t have much luck. In the end, the best results I got were from Steam’s Big Picture Mode on a Windows device, mostly launching Firefox (might have been Chrome?) with different launch arguments to imitate a smart TV.

    Most available software either doesn’t support Linux well, doesn’t support streaming services and outside software, or doesn’t support non-kb&m input methods. You can get two, but never all three. You could try SteamOS now that it’s out, but unfortunately, I wouldn’t hold out much hope for it having all the apps you need working.
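    For anyone wanting to try the browser route, here’s a minimal sketch of the kind of launcher I mean. To be clear, this is a reconstruction from memory, not my exact setup: it assumes a Chromium-based browser on your PATH (the --kiosk and --user-agent flags are real Chromium flags), and the user-agent string and URL are just illustrative examples.

    ```python
    # Hypothetical HTPC launcher: open a streaming site fullscreen while
    # pretending to be a smart TV, so the site serves its 10-foot UI.
    import subprocess

    # Example user-agent string for a generic smart TV (illustrative only).
    TV_UA = "Mozilla/5.0 (SMART-TV; Linux; Tizen 5.0) AppleWebKit/537.36"

    subprocess.run([
        "chromium",                    # or "google-chrome", "brave", etc.
        "--kiosk",                     # fullscreen, no browser chrome
        f"--user-agent={TV_UA}",       # spoof the TV user agent
        "https://www.youtube.com/tv",  # TV-oriented frontend, if it still loads
    ])
    ```

    Add that to Steam as a non-Steam game and it can be launched from Big Picture with controller navigation intact.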



  • That was actually my biggest disappointment with my degree - the course didn’t teach anywhere near enough for my tastes. However, I would hope that I was an outlier in that respect!

    From my own experience, and that of my social circles, you’re in the majority, and it’s not even close. I think a lot of schools are both bad at teaching and failing to account for how the world has changed since the internet. A lot of schools seem to want to stick to the bare minimum without changing methods or content, which unfortunately makes sense (financially), given capitalism and our current culture around schooling.


  • You seem to be missing what I’m saying. Maybe a biological comparison would help:

    An octopus is extremely smart, more so than even most mammals. It can solve basic logic puzzles, learn and navigate complex spaces, and plan and execute different and adaptive strategies to hunt prey. In spite of this, it can’t talk or write. No matter what you do, training it, trying to teach it, or even trying to develop an octopus-specific language, it will not be able to understand language. This isn’t because the octopus isn’t smart; it’s because it evolved for the purpose of hunting food and hiding from predators. Its brain has developed to understand how physics works and how to recognize patterns, but it just doesn’t have the ability to understand how to socialize, and nothing can change that short of rewiring its brain. Hand it a letter and it’ll try to catch fish with it rather than even consider trying to read it.

    AI is almost the reverse of this. An LLM has “evolved” (been trained) to write stuff that sounds good, with little emphasis on understanding what it writes. The “understanding” is about patterns in writing rather than underlying logic. This means that if the LLM encounters something that isn’t standard language, it will “flail” and start trying to apply what it knows, regardless of how well it applies. In the chess example, this might mean just responding with the most common move, regardless of whether it can be played. Ultimately, no matter what you input, an LLM is trying to find and replicate patterns in language, not underlying logic.
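    To make that concrete, here’s a toy sketch of the failure mode. This is purely my own illustration (no real LLM works this way): a fake “model” that only knows the statistically most common reply to the last move it recognizes, with the real python-chess library checking whether that reply is even legal in the actual position.

    ```python
    # Toy illustration: a "language model" of chess that only knows which
    # reply most often follows a given move in text, with no model of the
    # board itself. Requires the python-chess library (pip install chess).
    import chess

    # Pretend training data: the most common textual reply to each move.
    MOST_COMMON_REPLY = {"e4": "e5", "Nf3": "Nc6", "Bb5": "a6"}

    # Play out a normal Ruy Lopez: 1.e4 e5 2.Nf3 Nc6 3.Bb5 a6 4.Ba4
    board = chess.Board()
    for san in ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6", "Ba4"]:
        board.push_san(san)

    # The last move the fake model "recognizes" is Bb5, so it parrots the
    # usual reply to that: a6. But Black already played a6 two moves ago.
    suggestion = MOST_COMMON_REPLY["Bb5"]
    try:
        board.parse_san(suggestion)
        print(f"{suggestion} is legal here")
    except ValueError:  # python-chess raises a ValueError subclass here
        print(f"{suggestion} looks like chess, but is illegal in this position")
    ```

    The pattern lives in the text, not the game, so the output can sound fluent and still be nonsense.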


  • The LLM doesn’t have to imagine a board, if you feed it the rules of chess and the dimensions of the board it should be able to “play in its head”.

    That assumes it knows how to play chess. It doesn’t. It knows how to have a passable conversation. Asking it to play chess is like putting bread into a blender and being confused when it doesn’t toast.

    But human working memory is shit compared to virtually every other animal. This and processing speed is supposed to be AI’s main draw.

    Processing speed and memory in the context of writing. Give it a bunch of chess boards or chess notation, and it has no idea what it needs to remember, let alone where or how to move. If you want an AI to play chess, you train it on chess gameplay, not books and Reddit comments. AI isn’t a general-use tool.



  • Tomska comes to mind as a pretty hilarious example - not just because he turns them into skits; that’s normal enough. He had a whole saga trying to figure out how far he could push the boundaries of the VPN company sponsoring him before they would start intervening. It started off simply enough, with the South Park philosophy of “add provocative stuff so they cut that, rather than the jokes we like.” Rather than editing the script, they approved it as-is. He thought it was funny, and took that as a challenge. After increasingly crass and violent ads (on-brand for him, and with appropriate content warnings), he eventually went so far as to include an ad that even he considers way too far. Said ad later had to be edited out of the video it was included in. In my opinion, despite very obviously being ads, it’s collectively some of the funniest content he’s made.

    Here are his videos recapping the saga:


    Dear Surfshark, Please Fire Me

    Dear Surfshark, Please Forgive Me







  • At least from my layman’s understanding (I am not a lawyer):

    If you have legal access to the work being used for training, and no other licensing terms restrict its use, using it for training is currently not inherently considered copyright infringement. That said, if your copy is used to reference or recreate the character being copied, it would infringe on the copyright for the character. So legally, from my unprofessional understanding, you can make an AI voice clone, as long as you don’t try to replicate the character with it. This may be further regulated in some regions, but to my knowledge, most don’t have anything specific in law yet.

    On the other hand, morally…


  • I think two main things need to happen: increased transparency from AI companies, and limits on use of training data.

    In regards to transparency, a lot of current AI companies hide information about how their models are designed, produced, weighted, and used. This causes, in my opinion, many of the worst effects of current AI. Lack of transparency around training methods means we don’t know how much power AI training uses. Lack of transparency in training data makes it easier for the companies to hide their piracy. Lack of transparency in weighting and use means that many of the big AI companies can abuse their position to push agendas, such as Elon Musk’s manipulation of Grok and the CCP’s use of DeepSeek. Hell, if issues like these were more visible, it’s entirely possible AI companies wouldn’t have as much investment, and thus power, as they do now.

    In terms of limits on training data, I think a lot of the backlash is exaggerated. AI basically takes sources and averages them. While there is little creativity, the work is derivative and bland, not a direct copy. That said, if the works used for training were pirated, as many were, there obviously needs to be action taken. Similarly, there needs to be some way for artists to protect or sell their work. From my understanding, they technically have the legal means to do so, but as it stands, enforcement is effectively impossible and non-existent.