![](https://lemmy.dbzer0.com/pictrs/image/5765e0a4-8f36-4804-afff-118c5f708a61.jpeg)
![](https://lemmy.ml/pictrs/image/u9kB0kgEaN.png)
No? If it’s anonymized to “someone somewhere clicked this ad” that’s not possible to de-anonymize.
Do I expect it to be that anonymized? No. But the idea that it is always possible to de-anonymize data is just plumb wrong.
This sounds great, but it’s one of those things that is infinitely easier to say than do. You’re essentially asking for one of two things: Manual human intervention for every single image uploaded, or “the perfect image recognition system.” And honestly, the first is fraught with its own issues, and the second does not exist.
??? Your original proposed solution is literally a bandaid fix.
Advertising? This thing is essentially a theater. Yeah, it can run advertisements, but anything with a screen can do that. It’s like saying a movie theater is for advertising.
Free to access != free to redistribute
You really don’t see a difference between a single user citing wiki, vs a billion dollar company going to great lengths to squeeze every cent they can out of everyone involved when looking up information?
The same reason we don’t let companies sell photocopies of books? This isn’t a take on piracy, to be clear. This is a take on one company stealing content from another, and serving it up as if it were their own. And when Google has a monopoly on search, that fucks over everyone but Google, including you.
Those kinds of things are what people often take issue with Google about. Well, the second one anyway. The first is arguably not a search and is instead a calculation, but I admit that’s a bit of a semantic distinction.
The second, however, is Google taking information provided by third parties and presenting it to the user. It prevents traffic from flowing through to the original site, and is something people actively complain about.
Wingdings is a font so… it already is.
Not all encryption is vulnerable to quantum.
Why did you say “everyone has a target they are fine with genocide against,” then justify it with a lack of protests, and protests not accomplishing anything?
You clearly stated something as fact, then went beyond moving the goalposts; your justification is playing a completely different game.
I can’t think of anyone in my social circle who would ever shrug off genocide. Virtually everyone not taking part in genocide agrees it’s wrong, and anyone trying to justify it or saying “everyone is fine with it to some degree” is extremely suspect.
Yeah, I’ve been looking for this as well. I’ve got a constant check engine light because the catalytic converter is dead, and that’s about the only thing on the car I don’t care about.
Are you having a stroke, or am I missing a reference?
Just different allowed targets.
I’m sorry, I can’t get past that first paragraph. What “targets” do you think most people are okay with genociding? The fact that you think everyone has a group they’d be fine with wiping off the face of the earth completely is extremely concerning.
Have to use? No one has to use any library. It’s convenience, and in this case it’s literally so they don’t have to write code for older browser versions.
The issue here isn’t that anyone has to use it; the way it was used is the problem: directly linking to the current version of the code hosted by a third party, instead of hosting a copy yourself.
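As a sketch of the difference (URLs and file names hypothetical): the first pattern runs whatever the third party happens to be serving today, while the other two pin the exact code your users get.

```html
<!-- Risky: the page executes whatever code the third party is currently
     serving at this URL; if that host is sold or compromised, so is your page -->
<script src="https://cdn.example.com/polyfill.min.js"></script>

<!-- Safer: a versioned copy you host yourself, so the code can't change
     out from under you -->
<script src="/vendor/polyfill-3.110.1.min.js"></script>

<!-- Middle ground: still third-party hosted, but Subresource Integrity pins
     the exact file contents (hash value is a placeholder here) -->
<script src="https://cdn.example.com/polyfill-3.110.1.min.js"
        integrity="sha384-PLACEHOLDER"
        crossorigin="anonymous"></script>
```

With either of the last two, a hostile change to the upstream file either never reaches your users or fails the integrity check and simply doesn’t load.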
LLMs aren’t a scam; I don’t even understand how you could twist them into one. While something like NFTs has no real legitimate use case, LLMs excel at translation and as an advanced form of spelling and grammar checking.
Your complaint seems to boil down to “it doesn’t work in every use case it’s being applied to,” which is fair enough, but if I put a car on my bed and try to use it as a blanket… does that make the car a scam?
Why are you explicitly picking those examples, and not things like IoT, DevOps, and edge computing: all buzzwords, all successful and still in general use today?
You’re cherry-picking failed buzzwords and using them as proof that “AI” will fail.
To be clear, I agree that LLMs are bullshit for 95% of applications they are being put into. But at least argue in good faith.
Using the comments from Lemmy is clearly a case of selection bias. It would be like running a poll at a gym to see how many people think exercise is important, or asking Lemmy users if Linux is better than Windows. “The people I hang around have the same opinion as me” isn’t really a good litmus test for “does this actually represent public opinion.”
I highly doubt they have one team that switches between experiments and bug fixes, never doing two things at once. Not to mention that something ultimately being ripped out isn’t necessarily wasted effort. They could likely easily pivot virtually anything they put into this specific experiment into any number of other uses.
From reading the learn more link, it’s meant to just give them info on what ads worked. They would absolutely want this info, even if it was just “the ad you ran last week resulted in a dozen sales.”
Why would you think otherwise?