It's being blamed on a bad update pushed by the vendor CrowdStrike. In my experience, CrowdStrike has been a good gatekeeper between its clients and the malicious actors who attempt to invade, disrupt, and take down those clients' systems. That makes it all the more unfortunate that CrowdStrike was reportedly the one that inadvertently caused a worldwide outage with a single bad software update.
I earned my master's in cybersecurity about a year ago, but one thing I keep seeing in the environments I work in is supposed professionals who somehow think it's acceptable to develop code and deploy it without testing it first.
Additionally, there are the companies with no business continuity plan, change control process, or disaster recovery policy, nothing that would let them, if the worst should happen and an update breaks something or creates a problem, roll their changes back to the stable state that existed before the bad update was installed. Many also don't appear to have any redundancy or backups that could at least put them back in operation, even if they lose a few hours of data (or whatever their predetermined acceptable loss threshold is).
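To make that concrete, here's a minimal sketch of what staged change control with an automatic rollback can look like. Everything in it is a hypothetical stand-in (the host names, version strings, and the apply_update/health_check/rollback functions), not anything tied to CrowdStrike or any particular product; in a real environment those functions would wrap your actual deployment tooling and snapshots.

```python
"""Minimal sketch of a staged update with automatic rollback.

All functions below are hypothetical stand-ins for real tooling.
"""

def apply_update(host: str, version: str) -> None:
    # Hypothetical: push the given update version to one host.
    print(f"[{host}] installing {version}")

def health_check(host: str) -> bool:
    # Hypothetical: return True only if the host is still healthy
    # (boots, agent responds, no crash loops) after the update.
    print(f"[{host}] running post-update health check")
    return True

def rollback(host: str, version: str) -> None:
    # Hypothetical: restore the last known-good version/snapshot.
    print(f"[{host}] rolling back to {version}")

def staged_rollout(hosts: list[str], new_version: str, known_good: str) -> bool:
    """Update a small canary group first; stop and roll back on any failure."""
    canary, rest = hosts[:2], hosts[2:]

    for host in canary:
        apply_update(host, new_version)
        if not health_check(host):
            # One bad canary is reason enough to back out every host touched so far.
            for touched in canary[: canary.index(host) + 1]:
                rollback(touched, known_good)
            return False

    # Canary group looks healthy; continue to the rest of the fleet.
    for host in rest:
        apply_update(host, new_version)
        if not health_check(host):
            rollback(host, known_good)
            return False
    return True

if __name__ == "__main__":
    fleet = ["endpoint-01", "endpoint-02", "endpoint-03", "endpoint-04"]
    ok = staged_rollout(fleet, new_version="2.0.1", known_good="1.9.8")
    print("rollout succeeded" if ok else "rollout halted and rolled back")
```

The specifics don't matter; the point is that the last known-good version is tracked and the rollback path exists and gets exercised before an update ever reaches the whole fleet.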
I was telling my manager earlier this week that we need some redundancy put in place; in fact, just this past Monday I submitted tickets for the hardware and software that will let us do exactly that. I was also adamant that our solution not involve Azure or the cloud.