Azure VMs ruined by CrowdStrike patchpocalypse? Microsoft has recovery tips

Updated Did the CrowdStrike patchpocalypse knock your Azure VMs into a BSOD boot loop? If so, Microsoft has some tips to get them back online.

It’s believed that a bad channel file for CrowdStrike’s Falcon endpoint security platform caused its Falcon Sensor agent to crash its host. That has caused Windows machines around the world to become even less useful and wreaked havoc at airports, hospitals, emergency services, and countless other unexpected places.

The CrowdStrike failure is not believed to be related to yesterday's separate Azure outage, so if you're recovering from one, hopefully you didn't also have to deal with the other. If your VMs were borked by Falcon, however, read on.

Just keep booting

We’d tell you it’s a joke, but it’s not: Microsoft’s top piece of advice to fix your broken Azure VMs is to turn them off and on again – repeatedly. No, even more than that.

“We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines,” Microsoft said on its Azure status page as of writing. “Several reboots (as many as 15 have been reported) may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.” 

Microsoft says affected users can reboot their VMs in the Azure portal, or by using the Azure CLI or Azure Cloud Shell.
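
If you'd rather script the reboots than click restart 15 times, a minimal sketch using Azure PowerShell (which runs in Cloud Shell) might look like the following. The resource group and VM names are placeholders, and the guest-agent check is only a rough proxy for Windows having booted cleanly:

```powershell
$rg = "my-resource-group"   # placeholder resource group
$vm = "my-affected-vm"      # placeholder VM name

# Microsoft reports that as many as 15 restarts have been needed, so loop and
# check after each attempt whether the VM's guest agent has come back online.
for ($attempt = 1; $attempt -le 15; $attempt++) {
    Write-Host "Restart attempt $attempt of 15..."
    Restart-AzVM -ResourceGroupName $rg -Name $vm | Out-Null
    Start-Sleep -Seconds 60   # rough pause to let Windows try to boot (or blue-screen)

    $view = Get-AzVM -ResourceGroupName $rg -Name $vm -Status
    if ($view.VMAgent -and $view.VMAgent.Statuses[0].DisplayStatus -eq "Ready") {
        Write-Host "Guest agent is back after attempt $attempt -- Windows booted."
        break
    }
}
```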

It’s always a great situation when mitigation starts with “reboot and pray.” 

Of course, that’s not going to help everyone, and from there the steps are largely similar to what’s been reported by other people, like CrowdStrike’s head of threat hunting, Brody Nisbet: You gotta do it manually.

First, if you have a backup from before 1900 UTC yesterday, just restore that. If your backup habits are lax, then you’re going to have to repair the OS disk offline, which will be more difficult for those with encrypted disks.
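
One way to get at the disk offline, sketched here with Azure PowerShell and entirely placeholder names (Microsoft and CrowdStrike document several variants, including the az vm repair CLI extension), is to snapshot the broken VM's OS disk, build a copy from the snapshot, and attach that copy to a healthy rescue VM:

```powershell
$rg       = "my-resource-group"   # placeholder
$brokenVm = "my-affected-vm"      # placeholder: the VM stuck in the boot loop
$rescueVm = "my-rescue-vm"        # placeholder: a healthy Windows VM in the same region

# Deallocate the broken VM, then snapshot its OS disk and build a working copy.
Stop-AzVM -ResourceGroupName $rg -Name $brokenVm -Force
$osDiskName = (Get-AzVM -ResourceGroupName $rg -Name $brokenVm).StorageProfile.OsDisk.Name
$osDisk     = Get-AzDisk -ResourceGroupName $rg -DiskName $osDiskName
$snapCfg    = New-AzSnapshotConfig -SourceUri $osDisk.Id -Location $osDisk.Location -CreateOption Copy
$snap       = New-AzSnapshot -ResourceGroupName $rg -SnapshotName "cs-repair-snap" -Snapshot $snapCfg
$diskCfg    = New-AzDiskConfig -SourceResourceId $snap.Id -Location $osDisk.Location -CreateOption Copy
$copy       = New-AzDisk -ResourceGroupName $rg -DiskName "cs-repair-disk" -Disk $diskCfg

# Attach the copy to the rescue VM as a data disk so the bad file can be deleted,
# then later swap the repaired disk back onto the broken VM (Set-AzVMOSDisk + Update-AzVM).
$rescue = Get-AzVM -ResourceGroupName $rg -Name $rescueVm
$rescue = Add-AzVMDataDisk -VM $rescue -Name $copy.Name -ManagedDiskId $copy.Id -Lun 1 -CreateOption Attach
Update-AzVM -ResourceGroupName $rg -VM $rescue
```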

Once you’ve successfully attached a recovery disk, Microsoft says customers need to delete Windows\System32\drivers\CrowdStrike\C-00000291*.sys, the same recommendation Nisbet made for other affected users.
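
With the copied disk attached, the fix itself is a one-liner from the rescue VM. The F: drive letter below is purely an assumption, so check Disk Management for where the attached disk's Windows volume actually landed:

```powershell
# Drive letter F: is an assumption -- use whatever letter the attached volume received.
Remove-Item -Path "F:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys" -Force
```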

Unfortunately, even that might not work, Nisbet said – here’s hoping your systems don’t fall into that category.

Rebooting has been recommended largely to give the machine a chance to contact CrowdStrike's servers and pull down the corrected channel file before the faulty driver crashes the box again. Unfortunately, for machines stuck in a boot loop that window may never arrive. For those unable to boot into Windows, be it on a VM or a physical machine, the Internet Storm Center has recommended booting into Safe Mode with Networking and then following the steps above to delete the offending file; a rough sketch of that sequence follows.
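
Assuming you can get to an elevated PowerShell prompt (for instance via the recovery options Windows offers after repeated failed boots), and assuming C: is the system drive, the sequence might look like this:

```powershell
# Flag the default boot entry for Safe Mode with Networking, then reboot.
bcdedit /set '{default}' safeboot network
Restart-Computer -Force

# ...once the machine is back up in safe mode, remove the bad channel file,
# clear the safeboot flag, and reboot normally.
Remove-Item -Path "C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys" -Force
bcdedit /deletevalue '{default}' safeboot
Restart-Computer -Force
```

With the file gone and the safeboot flag cleared, the machine should come back up normally on the next restart. ®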

Updated at 1551 UTC on July 19, 2024, to add

CrowdStrike’s notice page for the outage has been updated to add more recovery options, as well as specific steps for AWS users and those whose Windows VMs are secured via BitLocker.
