This week’s Patch Tuesday brought something genuinely alarming from the Azure world. Researchers at Wiz disclosed OMIGOD — a set of four vulnerabilities in the Open Management Infrastructure (OMI) agent, a piece of software that Microsoft silently installs on Linux VMs in Azure when you enable certain services. The worst of these, CVE-2021-38647, allows unauthenticated remote code execution as root. Yes, root. With a CVSS score of 9.8.
And here’s the part that makes my blood pressure spike: most Azure customers running Linux VMs had no idea this software was installed on their machines.
## What Is OMI, and Why Is It on My VM?
OMI (Open Management Infrastructure) is an open-source project maintained by Microsoft. It’s a CIM (Common Information Model) management agent — essentially, a lightweight daemon that enables remote management and monitoring of Linux systems. Think of it as the Linux equivalent of WMI (Windows Management Instrumentation).
When you enable certain Azure services on a Linux VM — including Azure Automation, Azure Log Analytics, Azure Configuration Management, Azure Diagnostics, and others — Azure automatically deploys the OMI agent onto your VM. It runs as root, listens on port 5986 (HTTPS) or 5985 (HTTP), and accepts management commands via the WSMAN protocol.
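If you’re curious whether the agent is on a given VM right now, a quick look from a shell will tell you. The paths below are OMI’s standard packaging locations; note that depending on configuration, the agent may be bound to a local Unix socket only and not listening on TCP at all:

```bash
# Quick on-host check for the OMI agent (standard packaging locations)
ps -ef | grep -i '[o]mi'                  # omiserver / omiengine processes
sudo ss -ltnp | grep -E ':(5985|5986)\b'  # listening on the WSMAN ports?
ls /opt/omi/bin/ 2>/dev/null              # OMI's default install prefix
grep -i port /etc/opt/omi/conf/omiserver.conf 2>/dev/null  # remote-management config
```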
The critical vulnerability is breathtakingly simple. When OMI receives a management request, it checks for an authentication header. If the authentication header is entirely absent — not invalid, but simply missing — the request is processed as root. Remove the auth header, get root access. It’s the kind of vulnerability that makes you wonder how it survived any security review at all.
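To make that concrete, here is a minimal detection-only probe, assuming you are scanning infrastructure you own: it POSTs an empty SOAP envelope to the WSMAN endpoint with no Authorization header at all. A patched agent should reject the request with a 401; treat anything else as a reason to dig further. The status-code heuristic is my assumption, not an official test.

```bash
#!/usr/bin/env bash
# Detection-only probe: does this host answer unauthenticated WSMAN requests?
# Heuristic sketch; a patched OMI should return 401. Scan only hosts you own.
HOST="${1:?usage: $0 <host>}"

for PORT in 5986 5985; do
  SCHEME=http; [ "$PORT" -eq 5986 ] && SCHEME=https
  # -k: OMI ships a self-signed certificate. Note: no auth header is sent.
  CODE=$(curl -sk -o /dev/null -w '%{http_code}' --max-time 5 \
    -X POST "$SCHEME://$HOST:$PORT/wsman" \
    -H 'Content-Type: application/soap+xml;charset=UTF-8' \
    --data '<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"><s:Header/><s:Body/></s:Envelope>')
  echo "$HOST:$PORT -> HTTP $CODE"   # 000 means no response on that port
done
```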
## The Scope of the Problem
According to Wiz’s research, OMI was present in over 65% of the Azure customer environments they sampled, which translates to potentially millions of Linux VMs. And because the agent listens on network ports, any VM with OMI installed and the management ports exposed — either to the internet or to other VMs in the virtual network — is vulnerable.
The attack surface breaks down into two scenarios:
Internet-facing: If ports 5985/5986 are open to the internet (which they shouldn’t be, but misconfigurations happen), an attacker can gain root access from anywhere on the internet. Shodan queries are already showing thousands of exposed instances.
Internal network: Even if the management ports aren’t internet-facing, any attacker with access to your Azure virtual network can pivot through OMI to gain root on adjacent VMs. This makes OMIGOD an excellent privilege escalation and lateral movement tool for attackers who already have a foothold.
## The Trust Problem
What troubles me most about OMIGOD isn’t the vulnerability itself — software has bugs, and even embarrassing authentication bypasses happen. What troubles me is the trust model.
When I deploy a Linux VM in Azure, I expect to control what’s running on it. I choose the OS image, I install my packages, I configure my services. That’s the fundamental promise of IaaS: you get a virtual machine, and you control the software stack.
But Azure silently installs management agents without explicit consent. The OMI deployment happens as a side effect of enabling other services. There’s no dialog box saying “This will install a root-level management daemon on your VM that listens on network ports.” You enable Log Analytics, and OMI appears.
This pattern isn’t unique to Azure. AWS has the SSM Agent. GCP has the guest agent. All cloud providers install management software on VMs. But the OMIGOD disclosure highlights the risk: you’re running software you didn’t choose, didn’t audit, and might not even know about, and it can have critical vulnerabilities.
## The Patching Gap
Here’s where it gets worse. Microsoft released patches for the OMI vulnerabilities as part of Patch Tuesday on September 14. But — and this is critical — the fix landed in the upstream OMI code; simply applying the Patch Tuesday updates, or trusting the Azure platform to handle it, does NOT update the OMI agent already running on your Linux VMs.
For most affected Azure services, Microsoft needs to push an updated agent version, and this process is… not instantaneous. Some services auto-update OMI, but others require manual intervention. The Wiz team documented the patching matrix, and it’s confusing — different Azure services have different update mechanisms for OMI.
So we have a situation where:
- Microsoft silently installed vulnerable software on customer VMs
- Microsoft patched the vulnerability in OMI
- Microsoft cannot automatically patch many of the affected VMs
- Customers who didn’t know OMI was installed don’t know they need to patch it
This is a patching nightmare.
## Immediate Actions
If you’re running Linux VMs on Azure, here’s what to do right now:
Check for OMI: SSH into your VMs and query the package manager:

```bash
dpkg -l omi   # Debian/Ubuntu
rpm -qa omi   # RHEL/CentOS
```

If OMI is installed, check the version. Anything below 1.6.8-1 is vulnerable.
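Checking VMs one at a time over SSH doesn’t scale. If you’d rather sweep a fleet, one option is Azure’s run-command channel; `my-rg` and `my-vm` below are placeholder names:

```bash
# Query a VM's installed OMI version without SSH, via Azure run-command.
az vm run-command invoke \
  --resource-group my-rg \
  --name my-vm \
  --command-id RunShellScript \
  --scripts 'dpkg-query -W omi 2>/dev/null || rpm -q omi 2>/dev/null || echo "omi not installed"'
```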
Block the ports: Ensure ports 5985 and 5986 are not accessible from the internet. Check your Network Security Groups (NSGs) immediately. Even for internal traffic, restrict access to these ports to only the management subnets that need them.
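If you manage NSGs with the Azure CLI, a rule along these lines adds an explicit inbound deny; `my-rg` and `my-nsg` are placeholders, and the priority needs to sort ahead of (numerically below) any allow rules you have:

```bash
# Explicitly deny inbound traffic to the OMI/WSMAN management ports.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name my-nsg \
  --name deny-omi-wsman \
  --priority 100 \
  --direction Inbound \
  --access Deny \
  --protocol Tcp \
  --destination-port-ranges 5985 5986
```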
Update manually if needed:

```bash
wget https://github.com/microsoft/omi/releases/download/v1.6.8-1/omi-1.6.8-1.ssl_110.ulinux.x64.deb
sudo dpkg -i ./omi-1.6.8-1.ssl_110.ulinux.x64.deb
```

Check for compromise: Review OMI logs, look for unexpected processes running as root, check for newly created user accounts or SSH keys. If ports 5985/5986 were exposed to the internet, assume breach until you can prove otherwise.
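A few concrete starting points for that triage; the log path is OMI’s standard location, and the cutoff date is just an example:

```bash
# Starting points for compromise triage on a VM that had OMI exposed.
sudo less /var/opt/omi/log/omiserver.log       # OMI's own server log
ps -ef --forest | grep -B1 -A3 -i '[o]mi'      # OMI processes in the tree; any odd children?
sudo awk -F: '$3 == 0 {print $1}' /etc/passwd  # uid-0 accounts besides root?
sudo find / -name authorized_keys -newermt '2021-09-01' 2>/dev/null  # recently touched SSH keys
last -20                                       # recent logins
```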
Audit your Azure service dependencies: Understand which Azure services you’ve enabled that might have triggered OMI installation. Consider whether you actually need those services.
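To see which agents Azure itself has pushed onto a VM, listing its extensions is a reasonable first stop; OMI typically arrives alongside extensions such as the Log Analytics (OMS) agent. Placeholders as before:

```bash
# List the extensions Azure has deployed onto a VM.
az vm extension list --resource-group my-rg --vm-name my-vm -o table
```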
## My Take
I’ve been working with cloud infrastructure since the early AWS days, and the implicit trust we place in cloud providers has always made me uncomfortable. We assume that the platform layer is secure, that management agents are benign, and that automatic deployments are in our interest. Most of the time, that trust is warranted. But when it fails, it fails catastrophically.
OMIGOD is a wake-up call for cloud security posture management. You cannot treat IaaS VMs as if you fully control the software stack. You need to:
- Know your actual attack surface, including provider-installed agents
- Enforce network segmentation by default — management ports should never be broadly accessible
- Monitor for unexpected listening services as part of your security baseline (a minimal sketch follows this list)
- Have an incident response plan that accounts for provider-side vulnerabilities
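On that third point, the baseline doesn’t have to be fancy. A sketch, assuming a cron-friendly script and an illustrative baseline path:

```bash
# Diff the current set of listening TCP sockets against a known-good baseline.
# (/var/lib/hardening/listeners.baseline is an illustrative path.)
ss -ltn | awk 'NR > 1 {print $4}' | sort -u > /tmp/listeners.now
if diff -u /var/lib/hardening/listeners.baseline /tmp/listeners.now; then
  echo "no new listeners"
else
  echo "listener set changed -- investigate" >&2
fi
```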
The cloud shared responsibility model says the provider secures the platform and you secure your workload. But when the provider installs software in your workload without your knowledge, the responsibility boundary gets blurry. That ambiguity needs to be resolved — with better transparency from cloud providers about what they’re deploying, and better tooling for customers to audit their actual VM contents.
In the meantime, go check your Azure Linux VMs. Today.
