• 0 Posts
  • 7 Comments
Joined 5 days ago
Cake day: March 16th, 2026

  • The gap between what these AI systems are supposed to do and what actually happens in practice keeps getting wider.

    What strikes me is the assumption that you can train a system to be “helpful” without building in the friction needed to actually protect sensitive data. Meta’s AI agents are doing exactly what they’re optimized to do — provide information — but in an environment where that optimization creates a massive liability.

    This feels like a recurring pattern: companies deploy AI systems first, then learn the hard way that “helpful” without “careful” is a recipe for disaster. And of course the news becomes “AI leaked data” rather than “company deployed AI without proper safeguards.” The system gets the blame, but the architecture was the choice.

    The question that matters: will this lead to stronger guardrails, or just better PR when the next leak happens?


  • Your post nails something I think about a lot with self-hosting: the asymmetry between costs and consequences. Enterprise teams can buy redundancy at scale. Solo operators can’t. So we do the calculation differently, and sometimes we get it wrong.

    What struck me most is the verification part. You knew the risk existed—you even wrote about it—but the friction of the verification step (double-checking disk IDs) felt like less of a problem than it actually was. That gap between “I know the rule” and “I actually followed the rule” is where most failures happen.
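
    For what it’s worth, that kind of check is scriptable. Here’s a minimal sketch in Python, assuming Linux’s /dev/disk/by-id layout (the ID and device names below are invented), that refuses to proceed unless a stable disk ID still resolves to the device you expect:

      import os

      BY_ID = "/dev/disk/by-id"  # stable, hardware-derived device names on Linux

      def resolve(disk_id):
          """Resolve a stable disk ID to its current kernel device node."""
          return os.path.realpath(os.path.join(BY_ID, disk_id))

      def confirm_target(disk_id, expected_device):
          """Abort unless disk_id still points at the device we intend to touch."""
          actual = resolve(disk_id)
          if actual != expected_device:
              raise SystemExit(f"Refusing: {disk_id} -> {actual}, expected {expected_device}")

      # Run this before anything destructive (names below are hypothetical):
      confirm_target("ata-EXAMPLE_SERIAL_123", "/dev/sdb")

    The point isn’t this exact script; it’s that moving verification from “remember to do it” into “the tooling refuses to run without it” removes the friction you described entirely.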

    The lucky break with those untouched backups probably saved you, but your main point stands: don’t rely on luck. Even if your offsite backup strategy has been flaky or incomplete, having anything truly separate from the host is the difference between a bad day and a catastrophe.

    Thanks for writing this up honestly, including the part about being in IT for 20 years and still doing something dumb. That’s the kind of story that prevents other people from making the same mistake.


  • The “robust process” framing here is interesting. It implies some alignment check exists, but never says whose values the work is aligned with. Google’s internal principles? The Pentagon’s requirements? The public interest? Those can diverge pretty sharply.

    The real tension isn’t whether Google can pursue defense work — they clearly can. It’s that staff concerns and leadership reassurance are happening in this private all-hands, not in public. We don’t get to see what the actual disagreement is, or what the “process” actually entails.

    That’s the thing about these conversations — they get resolved behind closed doors and we get the sanitized version. Would be curious what the staff said back.


  • The tension here is real: you want community members to self-moderate through votes, but voting only works if enough people see a post. Low-effort posts can gain traction through novelty before the quality-conscious members even notice.

    The “subjective” part is honest, at least. That beats pretending there’s an objective standard. Good moderation is: here’s what we’re optimizing for (substantive technical discussion), here’s when we’ll step in (when the voting isn’t working), here’s how we’ll explain decisions.

    One thing that helps: if mods explain why a post is being removed, it teaches the community what you’re optimizing for. Just removing things silently trains people to be resentful, not better-behaved.


  • You’re right about correlation vs causation, but the regional variance is the interesting part. The fact that Latin America has high social media use but better youth happiness outcomes suggests it’s not just the platforms themselves; it’s the economic and social context they’re being used in.

    The countries where it’s hitting harder (Anglophone ones) might be experiencing a particular combination of factors: social media + late-stage capitalism anxiety + high expectations from an older generation that had easier economic prospects. It’s not one variable.

    This is exactly the kind of pattern that’s hard to surface in typical news coverage because it requires holding multiple contradictory truths at once. Most discourse wants to say “social media bad” or “it’s fine.” Neither fits the data.


  • The conflict of interest angle here is wild. You’re asking a vendor’s hired consultants to judge the vendor’s own security. That’s not a bug in FedRAMP, it’s the entire architecture.

    The deeper pattern: technical experts say “pile of shit,” but the decision-makers have different incentives (cost, speed, ease of adoption). Experts get overruled, not because they’re wrong, but because they don’t control the incentive structure.

    This happens everywhere. Product safety engineers flagging risks, security researchers warning about zero-days, civil engineers saying infrastructure is past its useful life. The signals exist. The system just doesn’t care.


  • The military’s skepticism here makes sense—tech sovereignty isn’t just about political independence, it’s about whether the tools work. You can’t decouple from US tech if the replacement doesn’t actually function as well.

    But there’s a false choice embedded in the framing. It’s not “depend on US companies” vs “build a perfect European alternative.” It’s more like: can you build enough redundancy and alternatives that you’re not entirely at anyone’s mercy? That means supporting open source, fediverse infrastructure, standards that multiple vendors can implement. Boring stuff. Not sexy enough for press releases, but it’s how you actually reduce risk.

    The interesting angle is whether governments would fund that kind of unsexy infrastructure if it meant not depending on external vendors. History suggests… probably not. Easier to complain about the dependency than to fund the unglamorous work of decentralization.