• 0 Posts
  • 565 Comments
Joined 3 years ago
Cake day: June 20th, 2023


  • My assumption is the bias is at least partially unintentional: recommendations are weighted heavily toward engagement above all else, and stoking fear and anger drives engagement. The distribution of content could also be a factor. On the right, it seems like everyone is trying to get in on the grift of advertising elk meat or Trump coin to exploit their viewers, while high-quality journalism and news is underfunded. For every Climate Town or A More Perfect Union, there are tens or hundreds of right-wing fearmongering videos.





  • This is anecdotal, but I moved into an apartment with a 30-year-old ionizing smoke detector, and its failure mode was being too sensitive: I assume because fewer alpha particles were being emitted from the decaying radioactive element, any faint smoke caused it to go off. Eventually it got into a state where it was beeping 100% of the time, which was when the landlord finally replaced it.

    My assumption with the 10-year replacement recommendation for americium-based smoke detectors is that it's meant to get the unit replaced before it becomes too sensitive and annoying, because the manufacturers were worried some people would otherwise remove the battery and just live without an active smoke detector.
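    As a rough sketch of the decay math behind that sensitivity claim, assuming the source is americium-241 with a half-life of about 432.2 years (the time spans below are illustrative, not from the comment):

    ```python
    def remaining_fraction(years, half_life=432.2):
        """Fraction of the original alpha activity left after `years`,
        using simple exponential decay: N(t) = N0 * 2^(-t / half_life)."""
        return 2 ** (-years / half_life)

    # Activity only drops a few percent over a detector's service life.
    print(f"after 10 years: {remaining_fraction(10):.3f}")  # ~0.984
    print(f"after 30 years: {remaining_fraction(30):.3f}")  # ~0.953
    ```

    With a half-life that long, the source itself only loses about 5% of its activity in 30 years, so dust and sensor aging probably contribute to the drift as much as the decay does.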



  • If I were to use an LLM, it would not be to upload the PDF and have it generate the Excel document directly; you're practically guaranteed to get made-up data if you ask it to do that. What I would do is ask an LLM to write a Python script that uses OCR, or some other programmatic method, to extract the data from the PDF into a CSV to be imported by Excel.

    If the PDF has some sort of data aggregation, like a column with the sum of each row, then don't include that column in the CSV output; have Excel do the calculation from the data the script imported. Then you just have to manually check that the computed values match the PDF to know whether any data came through wrong. Obviously, if multiple fields are mangled by bad OCR in ways that cancel each other out, the sum column would look accurate while the bad data persists, so some additional spot checking or aggregation would be needed to be confident in the numbers.
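    The validation step described above could be sketched like this, assuming the OCR stage has already turned each PDF table row into a list of strings; the column names and sample data here are hypothetical:

    ```python
    import csv
    import io

    # Hypothetical OCR output: label, three quarterly values, and the
    # PDF's own printed total as the last field.
    ocr_rows = [
        ["widgets", "10", "20", "30", "60"],
        ["gadgets", "5", "15", "25", "45"],
    ]

    def to_csv_without_total(rows):
        """Write a CSV of the raw values only, dropping the PDF's sum
        column so the spreadsheet recomputes it from the imported data."""
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["label", "q1", "q2", "q3"])
        for row in rows:
            writer.writerow(row[:-1])  # drop the trailing total column
        return buf.getvalue()

    def mismatched_totals(rows):
        """Spot check: return labels of rows where the recomputed sum
        disagrees with the total the PDF printed (a likely OCR error)."""
        return [
            row[0]
            for row in rows
            if sum(int(v) for v in row[1:-1]) != int(row[-1])
        ]

    print(to_csv_without_total(ocr_rows))
    print(mismatched_totals(ocr_rows))  # [] when every row checks out
    ```

    As noted, this catches single-field OCR errors but not errors that cancel out within a row, so it's a first-pass check rather than a guarantee.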













  • It would be nice if the clients could group cross-posts, or posts with the same URL, so you don't see duplicate content in the feed. Right now, with the posts about the Discord changes, there are three posts in the top 20 of my Hot feed: two that are cross-posted and another with the same press-release URL. Every time I cross-post, I feel bad for people who follow both communities, since I know I'm cluttering up their feed, so I do it sparingly, but other users seem to be cross-post happy.