• GreenKnight23@lemmy.world · 9 hours ago

    oh sure, when they fuck up DNS it’s a “race condition”.

    when I fuck up DNS it’s a “fireable offense”.

      • MelodiousFunk@slrpnk.net · 11 hours ago

        If I had a nickel for every time clearing the ARP tables fixed a problem, I’d have a shitload of nickels.
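        (If you want to see what you’re about to nuke first, the neighbour table is easy to peek at on Linux; the snippet below is just a sketch, and the actual flush is the usual "ip neigh flush all" run as root.)

        # Read-only peek at the kernel's ARP/neighbour table on Linux.
        # (Actually flushing it is a separate step: "ip neigh flush all", run as root.)
        with open("/proc/net/arp") as arp_table:
            for entry in arp_table:
                print(entry.rstrip())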

          • MelodiousFunk@slrpnk.net · 10 hours ago

            These things happen when a skinflint company contracts out network setup for a decade, gets acquired by another skinflint company that axes the contractors and doesn’t hire on-site network personnel, gradually builds out infra on top of the unsupported foundation, and then hires C-suite buddies who want to bring in their own people to further muddy the waters.

        • ramble81@lemmy.zip · 10 hours ago

          Oh man. At one of my old companies, the devs would always blame the network. Even after we spent a year upgrading and removing all SPOFs, they’d still blame the network…

          “Your application is somehow producing 2 billion packets per second and your SQL queries are returning 5GB of data”…. “See! The network is too slow and it has problems”

      • NickwithaC@lemmy.world · 14 hours ago

        I always view the source of websites like this, and this is one of the worst I’ve seen. 217 lines of code (including inline JavaScript?!) and a Google tag for some reason, all to put the word YES in green on black.

        • Xylight@lemdro.id · 49 minutes ago

          This made me mad, so I made a single, ultra-minimal HTML page in 5 minutes that you can just paste into your URL box:

          data:text/html;base64,PCFkb2N0eXBlaHRtbD48Ym9keSBzdHlsZT10ZXh0LWFsaWduOmNlbnRlcjtmb250LWZhbWlseTpzYW5zLXNlcmlmO2JhY2tncm91bmQ6IzAwMDtjb2xvcjojMmYyPjxoMT5JcyBpdCBETlM/PC9oMT48cCBzdHlsZT1mb250LXNpemU6MTJyZW0+WWVz
          

          source code:

          <!doctypehtml><body style=text-align:center;font-family:sans-serif;background:#000;color:#2f2><h1>Is it DNS?</h1><p style=font-size:12rem>Yes
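          For anyone who wants to rebuild that data: URL themselves, a short Python sketch (reusing the exact HTML above) should reproduce it:

          import base64

          # The minified HTML from above, byte for byte.
          html = (
              b"<!doctypehtml><body style=text-align:center;font-family:sans-serif;"
              b"background:#000;color:#2f2><h1>Is it DNS?</h1><p style=font-size:12rem>Yes"
          )

          # Base64-encode it and prepend the data: scheme so it can be pasted into the URL bar.
          print("data:text/html;base64," + base64.b64encode(html).decode("ascii"))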
          
        • ijhoo@lemmy.ml · 13 hours ago

          Did not think of doing that.

          I guess I never expected anyone to put fcking JavaScript on a page as simple as that.

  • falseWhite@lemmy.world · 15 hours ago

    That’s what you get when you let go of hundreds of employees from your cloud computing unit in favour of AI.

    I hope they end up having to compensate for the billions in losses they caused to businesses and people.

      • falseWhite@lemmy.world · 15 hours ago (edited)

        They do have contracts and are obligated to provide a certain “uptime”, which is usually 99% or so. If they fail to provide that, they are liable to compensate for the losses.

        Or do you think that Amazon is above the law and no other company could sue them?

        It all depends on what kind of contracts they have.

        • WASTECH@lemmy.world · 11 hours ago

          These contracts do not stipulate reimbursement for lost revenue. The “uptime guarantee” just gets you a partial discount or service refund for the impacted services.

          It is on the customer to architect their environment for high availability (use multiple regions or even multiple hyperscalers, depending on the uptime need).

          Source: I work at an enterprise that is bound by one of these agreements (although not with AWS).

          • CheezyWeezle@lemmy.world · 10 hours ago

            SLA contracts can have a plethora of stipulations, including fines and damages for missing SLO. It really depends on how big and important the customer is. For example, you can imagine government contracts probably include hefty fines for causing downtime or data loss, although I am not involved with or familiar with public sector/ government contracts or their terms.

            You can imagine that a customer that is big enough to contract a cloud provider to build new locations and install a bunch of new hardware just for them, would also be big enough to leverage contract terms that include fines and compensation for extended downtime or missing SLO.

            I work at a data center for a major cloud provider, also not AWS

        • Onomatopoeia@lemmy.cafe · 14 hours ago (edited)

          Much of this stuff is automatic - I’ve worked with such contracted services where uptime is guaranteed. The contracts dictate the terms and conditions for refunds, we see them on a monthly basis when uptime is missed and it’s not done by a person.

          I imagine many companies have already seen refunds for outage time, and Amazon scrambled to stop the automation around this.

          They’ll have little to stand on in court for something this visible and extensive, and could easily lose their shirt to fines and penalties when a big company sues over the breach and then chooses not to renew.

          Just cause they’re big doesn’t mean all their clients are small or don’t have legal teams of their own.

        • Passerby6497@lemmy.world · 12 hours ago

          99% uptime in a year gives you 3.65 days of downtime, which I think would still be within SLA (assuming nothing else happened this year). Though once you get to three nines (99.9%), you’ve only got a shift and change you can be down before you breach SLA.

          If their reliability metrics are monthly, 99% gets you less than a shift of downtime, so they’d be out of SLA and could probably yell to get money back.
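          For concreteness, a quick back-of-the-envelope downtime budget in Python (assuming a 365-day year and a 30-day billing month):

          # Downtime allowed per year and per 30-day month at a few common SLA levels.
          HOURS_PER_YEAR = 365 * 24
          HOURS_PER_MONTH = 30 * 24

          for sla in (0.99, 0.999, 0.9999):
              yearly = HOURS_PER_YEAR * (1 - sla)
              monthly = HOURS_PER_MONTH * (1 - sla)
              print(f"{sla:.2%}: {yearly:5.1f} h/year ({yearly / 24:.2f} days), {monthly:4.2f} h/month")

          # 99.00%:  87.6 h/year (3.65 days), 7.20 h/month
          # 99.90%:    8.8 h/year (0.36 days), 0.72 h/month
          # 99.99%:    0.9 h/year (0.04 days), 0.07 h/month

          Which lines up with the figures in this thread: 99% over a year is 3.65 days, 99.9% is about a shift and change, and 99.99% is under an hour.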

          • Phoenixz@lemmy.ca · 12 hours ago

            I worked at a datacenter that sold clients 99.99% uptime.

            Fun times with a maximum of about one hour of downtime per year for hundreds of servers

        • BCsven@lemmy.ca · 14 hours ago

          Most services have a clause that they are not liable for unforeseen issues… Depends how good the lawyers were when formalizing the contracts.

          • Passerby6497@lemmy.world · 12 hours ago

            Good luck arguing that a missed config counts as an ‘unforeseen issue’. If they go that route, people will be all over them for not being SOC compliant wrt change control.

            • BCsven@lemmy.ca · 8 hours ago

              They can try to argue that the latency issue and the stale state were an unknown/unanticipated problem. Like when half of Canada’s Rogers network went down, affecting most debit payment systems: testing of the routing change showed it was OK, but the real-world flip went haywire.

        • BakerBagel@midwest.social · 14 hours ago

          Amazon has more money than most countries. They can outlast any company in court, or just ban you from their services in the future.

          • Onomatopoeia@lemmy.cafe · 14 hours ago

            Depends on who we’re talking about. Companies like finance orgs are all about legal contracts and would be able to hold their feet to the fire.

            You don’t want to go to court against a finance company or any very large org where contract law is their bread and butter (basically any large/multinational corp).

            Amazon’s not hosting just small operations.

    • Possibly linux@lemmy.zip · 11 hours ago

      Mistakes happen with or without AI

      The problem is that the current internet is structured in a way that creates high risk systems that can cause a massive outage. We went from having thousands of independent companies to a handful of massive ones. A mistake by a single company shouldn’t be able to black out half the internet.

    • Phoenixz@lemmy.ca · 12 hours ago

      Was it proven that AI was the cause?

      I’m not saying it wasn’t, just that if it really was, I’d like a source for that claim.

      • Serinus@lemmy.world · 5 hours ago

        No, but it clearly wasn’t the solution. They likely could have used some of those people they fired for that.

      • jaybone@lemmy.zip · 9 hours ago

        There was an article in my lemmy all feed yesterday claiming so. But it was a super questionable shady site, which people were calling out.

    • Auli@lemmy.ca · 13 hours ago

      Silly peon, rich people don’t suffer consequences.

  • TommySoda@lemmy.world · 13 hours ago

    This is purely anecdotal, but I have been running into a lot of DNS issues over the past couple of months where I work. Three of the computers and even one of the laptops for remote work were having DNS issues that needed to be fixed. One even needed Windows reinstalled after fixing the DNS issue (which was probably unrelated, but worth mentioning).

    I’m honestly starting to think that the internet in general might be imploding. Not sure why, but replacing so many developers and programmers with AI might be responsible. Who knows, but it’s definitely very strange.

    • ubergeek@lemmy.today · 10 hours ago

      A huge problem is developers who lack a fundamental understanding of how the internet even works. I’ve had to explain how short, unqualified names resolve vs how FQDNs resolve; or why you may not even be able to reach another node in your proverbial cluster because the nodes are on different subnets; or why using GUIDs as hostnames is a generally bad idea and will cause things to fail in unpredictable ways, especially with deeply nested subdomains.
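      For the curious, here’s a rough sketch of the search-list behaviour a resolv.conf-style stub resolver uses to qualify short names (the search domains and ndots value are made-up examples):

      # Rough sketch of how a stub resolver expands an unqualified name using its
      # search list ("search" + "ndots" in resolv.conf). Domains here are hypothetical.
      SEARCH_DOMAINS = ["prod.example.com", "example.com"]
      NDOTS = 1  # the common default

      def candidate_names(name: str) -> list[str]:
          """Return the fully qualified candidates a resolver would try, in order."""
          if name.endswith("."):                # already absolute, e.g. "db.example.com."
              return [name]
          candidates = []
          if name.count(".") >= NDOTS:          # "dotty" enough: try it as-is first
              candidates.append(name + ".")
          candidates += [f"{name}.{d}." for d in SEARCH_DOMAINS]
          if name.count(".") < NDOTS:           # short name: the absolute form is tried last
              candidates.append(name + ".")
          return candidates

      print(candidate_names("db"))           # ['db.prod.example.com.', 'db.example.com.', 'db.']
      print(candidate_names("db.internal"))  # ['db.internal.', 'db.internal.prod.example.com.', ...]

      Which is exactly why a short name that works on one box can resolve to something completely different (or nothing at all) on a host with a different search list.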

      • GreenKnight23@lemmy.world · 9 hours ago

        I have worked with too many devs that didn’t even know what the 7 OSI layers are or why they exist.

        they didn’t know what a network port was used for and why it’s important to not expose 3306 to the internet.

        they couldn’t understand that fragmentation of a message bus occurs when you don’t dedupe the contents.

        you know, morons.

    • Possibly linux@lemmy.zip · 13 hours ago

      The biggest issue is how centralized the internet has become. It went from a bunch of local servers to a handful of cloud providers.

      We need to spread things out again

      • amino@lemmy.blahaj.zone · 12 hours ago

        Signal is definitely part of the fun internet, they just decided to rely on AWS due to techbro culture I assume?

        • dubyakay@lemmy.ca · 3 hours ago

          They rely on AWS due to a favourable hosting contract, and also to prove the concept that they can be hosted securely on a hostile provider without the provider having any clue at all about what data is being sent between the parties.

          • amino@lemmy.blahaj.zone · 3 hours ago

            sure, proving to the audience that you can kick yourself in the nuts over and over while maintaining the privacy of your testicle’s innards is impressive from a biological standpoint but it still looks stupid to a normal person. I don’t hate signal, I will continue using it but this and their crypto scam makes me doubt some of their choices and how they’ll operate in the future

  • SayCyberOnceMore@feddit.uk · 14 hours ago

    I’m glad these things happen… it keeps everyone aware that the cloud is fragile and a Plan B should be considered for mission-critical tasks.

    I’m also hoping that it will improve cloud resiliency because a complete / partial restart of cloud systems needs a whole different approach than maintaining a running system.

  • Flax@feddit.uk · 15 hours ago

    Makes sense. DNS is quite a single point of failure

    • non_burglar@lemmy.world · 15 hours ago

      It’s true.

      It comes up at work, it comes up in discussions on Linux podcasts I listen to, it comes up here…

      We have a big, dangerous impending problem in DNS.

      • Flax@feddit.uk · 14 hours ago

        The issue here isn’t DNS. The issue here is a large portion of the internet relying on a single data centre on the US East coast. Ideally, a lot of competing hosting companies would exist so if one goes down, it’s just one service and very few people notice.

        • Onomatopoeia@lemmy.cafe · 14 hours ago

          So much this.

          Why is Signal hosted in one location on AWS, for example? That’s the sort of thing that should be in multiple places around the world with automatic failover.
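          Even without fancy DNS tricks, a client can do crude failover on its own; a minimal sketch, assuming two hypothetical regional endpoints:

          import urllib.error
          import urllib.request

          # Hypothetical regional deployments of the same service.
          ENDPOINTS = [
              "https://use1.api.example.org/health",
              "https://euw1.api.example.org/health",
          ]

          def first_healthy(endpoints, timeout=2):
              """Return the first endpoint whose health check answers 200, else None."""
              for url in endpoints:
                  try:
                      with urllib.request.urlopen(url, timeout=timeout) as resp:
                          if resp.status == 200:
                              return url
                  except (urllib.error.URLError, OSError):
                      continue  # unreachable or timed out: fall through to the next region
              return None

          print(first_healthy(ENDPOINTS) or "everything is down, must be DNS")

          Real setups push this into DNS or a load balancer with health checks, but the principle is the same: more than one region, plus something that notices when one disappears.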

        • non_burglar@lemmy.world · 12 hours ago

          Yes, that’s true, I guess it’s a separate issue. But the way DNS currently runs is a problem waiting to happen.

  • pop [he/him]@lemmy.blahaj.zone · 14 hours ago

    So, in the end they turned off the thing that caused this whole mess and everything is still working.

    What’s the point of having it, then?