This isn’t a gloat post. In fact, I was completely oblivious to this massive outage until I tried to check my bank balance and it wouldn’t log in.

Apparently Visa Paywave, banks, some TV networks, EFTPOS, etc. have gone down. Flights have had to be cancelled as some airlines’ systems are down too. Gas stations and public transport systems are inoperable, and numerous Windows systems and Microsoft services are affected as well. (At least according to one of my local MSM outlets.)

Seems insane to me that one company’s messed-up update could cause so much global disruption and take down so many systems :/ This is exactly why centralisation of services, with large corporations gobbling up smaller companies and becoming behemoth services, is so dangerous.

  • 0x0@programming.dev · 5 months ago

    Decades of IT experience here.

    Do any changes - especially upgrades - on local test environments before applying them in production.

    The scary bit is what most in the industry already know: critical systems are held together with duct tape and maintained by juniors, ’cos they’re the cheapest Big Money can find. And even when they’re not, “There’s no time” or “It’s too expensive” are probably the most common answers a PowerPoint manager will give when a serious technical issue is raised.

    The Earth will keep turning.
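The test-before-production advice above can be sketched as a simple ring rollout: apply an update to each ring in turn (lab, canary, production) and stop the moment health checks fail, so a bad update never reaches the later rings. This is a minimal illustration, not any vendor’s real tooling; `apply_update` and `health_check` are hypothetical stand-ins for whatever your environment provides.

```python
def roll_out(update, rings, apply_update, health_check):
    """Apply `update` ring by ring; halt at the first unhealthy ring.

    `rings` is an ordered list of host lists, e.g. [["lab1"], ["canary1"],
    ["prod1", "prod2"]]. Returns (rings fully deployed, failed ring or None).
    """
    deployed = []
    for ring in rings:
        for host in ring:
            apply_update(host, update)       # hypothetical deployment call
        if not all(health_check(h) for h in ring):
            return deployed, ring            # stop: later rings are never touched
        deployed.append(ring)
    return deployed, None
```

The point of the structure is that production hosts are simply never reached when an earlier ring goes unhealthy, which is the property the comment is arguing for.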

    • goodgame@feddit.uk · 5 months ago

      Some years back I was the ‘Head’ of systems stuff at a national telco that provided the national telco infra. Part of my job was to manage the national systems upgrades. I had the stop/go decision to deploy, and indeed pushed the ‘enter’ button to do it.

      I was a complete PowerPoint Manager with no clue what I was doing; it was total Accidental Empires, and I should not have been there. Luckily I got away with it for a few years. It was horrifically stressful and not the way to mitigate national risk. I feel for the CrowdStrike engineers.

      I wonder if the latest embargo on Russian oil sales is in any way connected?

      • 0x0@programming.dev · 5 months ago

        “I wonder if the latest embargo on Russian oil sales is in any way connected?”

        Doubt it, but it’s ironic that this happens shortly after Kaspersky gets banned.

    • ik5pvx@lemmy.world · 5 months ago

      Unfortunately Falcon self-updates, and it won’t work properly if you don’t let it.

      Also add “customer has rejected the maintenance window” to your list.

      • marcos@lemmy.world · 5 months ago

        Well, “don’t run self-updating shit in your production environment” also applies.

        As in: “if you bought something like this, there’s a problem with you”.

    • HumanPenguin@feddit.uk · 5 months ago

      Not OP, but that is how it used to be done. The issue is the attacks we have seen over the years, i.e. ransomware attacks etc. They have made corps feel they need to patch and update instantly to avoid attacks, so they depend on the corp they pay for the software to test the rollout.

      Auto-update is a double-edged sword. Without it, attackers will take advantage of the delay. With it... well, today.

      • 0x0@programming.dev · 5 months ago (edited)

        I’d wager most ransomware relies on old vulnerabilities. Yes, keep your software updated, but you don’t need the latest and greatest delivered straight to production without any kind of testing first.

        • HumanPenguin@feddit.uk · 5 months ago

          Very much so. But the vulnerabilities tend not to be discovered (by developers) until an attack happens, and auto-updates are generally how the spread of an attack is limited.

          Open source can help slightly: because both good and bad actors unrelated to development can see the code, it is more common for alerts to land before attacks. But it’s far from a fix-all.

          Generally, though, the time between discovery and fix is a worry for big corps, which is why auto-updates have been accepted with less manual intervention than was common in the past.

          • SayCyberOnceMore@feddit.uk · 5 months ago

            I would add that a lot of attacks happen after a fix has been released: compare the previous release with the patch and bingo, there’s the vulnerability.

            But agreed, patching should happen regularly, just with a few days’ delay after the supplier releases it.
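The few-days-delay policy above can be sketched as a soak filter: only releases older than a chosen soak period are eligible to install, so a known-bad update gets caught by early adopters before it reaches you. The `SOAK_DAYS` value and the release-record fields here are assumptions for illustration, not anyone’s real update schema.

```python
from datetime import date, timedelta

SOAK_DAYS = 3  # assumed policy; pick what fits your risk appetite

def eligible(releases, today, soak_days=SOAK_DAYS):
    """Keep only releases that have soaked for at least `soak_days`.

    Each release is assumed to be a dict with a "released" date field.
    """
    cutoff = today - timedelta(days=soak_days)
    return [r for r in releases if r["released"] <= cutoff]
```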

      • Avatar_of_Self@lemmy.world · 5 months ago (edited)

        I get the sentiment, but defense in depth is a methodology to live by in IT, and auto-updating over the Internet is not a good risk to take in general. For example, should CrowdStrike just disappear one day, your entire infrastructure and critical services shouldn’t be at enormous risk.

        Even if it’s your anti-virus, a virus or ransomware shouldn’t be able to easily propagate through the enterprise. If it can, it’s doubtful something like CrowdStrike is going to be able to update and suddenly reverse course. If it could, then you’re just lucky the ransomware that made it through didn’t do anything in defense of itself (disconnecting from the network, blocking CIDRs like CrowdStrike’s update servers, killing processes, whatever).

        And frankly, you can still update those clients anyway from your own AV update server (a product you’d be using if you aren’t allowing updates straight from the Internet), rolling them out to dev first, with phasing and/or schedules from your own infrastructure.

        Crowdstrike is just another lesson in that.
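The phasing idea above (rolling updates out in waves from your own infrastructure) can be sketched by hashing hostnames into stable phases, so the same machines always update in the same wave. This is a hedged illustration; the function names are mine, not any vendor’s API.

```python
import hashlib

def phase_of(hostname, phases=4):
    """Stable phase assignment: the same host always lands in the same wave."""
    return hashlib.sha256(hostname.encode()).digest()[0] % phases

def hosts_in_phase(hosts, phase, phases=4):
    """Hosts scheduled to update in a given wave."""
    return [h for h in hosts if phase_of(h, phases) == phase]
```

A deterministic hash, rather than random assignment, means a host’s phase survives restarts and re-runs of the scheduler, so wave 0 is always the same sacrificial set.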