• 1 Post
  • 4 Comments
Joined 18 days ago
Cake day: February 6th, 2026

  • This definition of social media is new to me as well, thanks for sharing it. This sort of clarifies a term I really dislike, and which you’ve used: “the algorithm”. It’s always seemed a little murky to me which algorithms it refers to. It’s like saying “don’t eat food with chemicals in it”.

    Lemmy does have “an algorithm”; it’s just a relatively simple one, based on the communities one is subscribed to plus some vote/comment data for the various sort orderings.

    Lemmy also absolutely implements a social graph – the data about who has interacted with whom is all stored by the system. It’s not explicitly stored as a graph structure, but then we’re arguing database schemas.

    As I understand it, however, you’re saying “social media” arises when the “social graph” data structure is used as an input to “the algorithm”. That seems like a pretty robust definition to me.

    One bit of pedantry: user blocks on Lemmy are, by a general definition, a form of social graph, and they do affect what content people see. So Lemmy could technically qualify as social media by the definition I’ve written here. I’m not sure what a more precise definition could be that avoids this technicality.
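
    To make the “relatively simple” ranking concrete, here is a sketch of a generic vote/age-decay hot rank. The function name, the log base, and the decay exponent are all illustrative assumptions for this sort of ranking, not Lemmy’s actual code:

    ```python
    import math
    from datetime import datetime, timedelta, timezone

    def hot_rank(score: int, published: datetime, now: datetime) -> float:
        """A hypothetical hot-rank score: higher votes and newer posts rank higher.

        A generic vote/age-decay formula for illustration only,
        not Lemmy's actual implementation.
        """
        hours = max(0.0, (now - published).total_seconds() / 3600)
        # log10 damps runaway vote counts; the (hours + 2)**1.8 term decays old posts
        return math.log10(max(1, 3 + score)) / ((hours + 2) ** 1.8)

    now = datetime.now(timezone.utc)
    posts = [
        ("old, popular", 500, now - timedelta(hours=48)),
        ("new, modest", 20, now - timedelta(hours=1)),
    ]
    ranked = sorted(posts, key=lambda p: hot_rank(p[1], p[2], now), reverse=True)
    # the newer post outranks the older, higher-scored one: decay dominates
    ```

    The point is just that “the algorithm” here is a short, inspectable formula over public vote data, rather than a personalized model fed by a social graph.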



  • Interesting, thanks for doing the research!

    As an extreme non-expert, I would say “deliberate removal of a part of a model in order to study the structure of that model” is a somewhat different concept from “intrinsic and inexorable averaging of language by LLM tools as they currently exist”. They may well involve similar mechanisms, though, and that may be what the OP is referencing; I don’t know enough of the technical side to say.

    That paper looks pretty interesting in itself; other issues aside, LLMs are really fascinating in the way they build (statistical) representations of language.


    This is a good name for one of the main reasons I’ve never really felt a desire to have an LLM rephrase/correct/review something I’ve already written. It’s why I’ve never used Grammarly, and why I turned off those infuriating “phrasing” suggestions in Microsoft Word, which serve only to turn a perfectly legible sentence into the verbal equivalent of Corporate Memphis.

    I’m not a writer, but lately I often deliberately edit myself less than usual, to stay as far as possible from the semantic “valley floor” along which LLM text tends to flow. It probably makes me sound a bit unhinged at times, but hey, at least it’s slightly interesting to read.

    I do wish the article made it clear whether this is an existing term (or even phenomenon) among academics, something the author is coining as of this article, or somewhere in between.


    GPT-4o mini, “Rephrase the below text in a neutral tone”:

    This name is appropriate for one key reason: I have not felt the need to use an LLM for rephrasing, correcting, or reviewing my writing. This is also why I have not utilized Grammarly and have disabled the “phrasing” suggestions in Microsoft Word, which often transform a clear sentence into something overly corporate or generic.

    Although I wouldn’t categorize myself as a writer, I have been intentionally editing myself less than usual lately to avoid the typical style associated with LLM-generated text. This approach might come across as unconventional at times, but it can also make for more engaging reading.

    I also wish the article clarified whether this term is already established in academic circles, if the author is introducing it for the first time, or if it falls somewhere in between.

    “avoid the typical style associated with LLM-generated text” – slop!