
Pegasus Spyware — Zero-Click Exploits and What They Mean for Software Security

·771 words·4 mins
Osmond van Hemert
Breaches & Zero-Days - This article is part of a series.
Part : This Article

This week, a consortium of journalists published the Pegasus Project, exposing how NSO Group’s Pegasus spyware has been deployed against journalists, activists, and political figures worldwide. The technical details emerging from Amnesty International’s forensic methodology report are staggering — and as someone who’s spent decades thinking about software security, I find the implications deeply troubling.

What Makes Pegasus Different

Pegasus isn’t your average piece of malware. It leverages zero-click exploits, meaning the target doesn’t need to open a link, download a file, or take any action at all. A specially crafted iMessage, for instance, can compromise an iPhone without the recipient ever interacting with it.

The attack chain reportedly exploits vulnerabilities in Apple’s iMessage processing pipeline — specifically how the system handles certain file formats before they’re even rendered to the user. This is a fundamentally different threat model from traditional phishing attacks. You can’t train users to avoid something they never see.

According to Amnesty International’s forensic methodology report, the spyware can extract messages, emails, and photos, record calls, and silently activate the microphone and camera. It operates across both iOS and Android, with platform-specific exploit chains for each.

The Zero-Click Problem

What keeps me up at night about zero-click exploits isn’t just Pegasus — it’s the architectural pattern they expose. Modern messaging apps perform complex parsing of rich media formats before any user interaction. This creates an enormous attack surface that exists purely by design.

Consider the chain: a message arrives, the OS processes it, the app parses the content type, renderers decode the payload — all before a single pixel appears on screen. Each step involves complex C/C++ code processing untrusted input. For an attacker with enough resources, this is a goldmine.
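To make that concrete, here’s a minimal sketch in Rust of parsing an entirely hypothetical length-prefixed message format. The point is what happens when an attacker lies about the length field: in a memory-safe language, the bounds check turns a would-be out-of-bounds read into a recoverable error.

```rust
#[derive(Debug, PartialEq)]
enum ParseError {
    Truncated,
    LengthOutOfBounds,
}

/// Parse a payload of the (hypothetical) form:
/// [1-byte type][2-byte big-endian length][body].
fn parse_message(input: &[u8]) -> Result<(u8, &[u8]), ParseError> {
    if input.len() < 3 {
        return Err(ParseError::Truncated);
    }
    let msg_type = input[0];
    let len = u16::from_be_bytes([input[1], input[2]]) as usize;
    // The bounds check that hand-rolled C parsers historically forget:
    // slice::get returns None instead of reading past the buffer.
    let body = input
        .get(3..3 + len)
        .ok_or(ParseError::LengthOutOfBounds)?;
    Ok((msg_type, body))
}

fn main() {
    // Well-formed: type 0x01, declared length 3, body "abc".
    let ok = parse_message(&[0x01, 0x00, 0x03, b'a', b'b', b'c']);
    assert_eq!(ok, Ok((0x01, &b"abc"[..])));

    // Malicious: claims a 65535-byte body that isn't there.
    let bad = parse_message(&[0x01, 0xFF, 0xFF, b'a']);
    assert_eq!(bad, Err(ParseError::LengthOutOfBounds));
    println!("oversized length rejected");
}
```

In C, the equivalent `memcpy(body, input + 3, len)` with an attacker-controlled `len` is exactly the class of bug these exploit chains feed on.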

Apple’s BlastDoor sandbox, introduced in iOS 14, was supposed to mitigate exactly this class of attack by isolating message parsing. The fact that Pegasus apparently found ways around it tells you something about the difficulty of securing these pipelines. You can add layers of sandboxing, but sufficiently motivated attackers with nation-state budgets will find seams.

Supply Chain Trust in a Post-Pegasus World

The broader question for our industry is about trust in the software supply chain. NSO Group sells Pegasus exclusively to governments, ostensibly for counter-terrorism and law enforcement. But the leaked list of 50,000+ phone numbers suggests the tool is being used far beyond those narrow justifications.

This creates an uncomfortable dynamic for software vendors. Every vulnerability you ship isn’t just a bug — it’s potential ammunition for surveillance vendors. The market for zero-day exploits is thriving, and companies like NSO Group, Candiru, and others are willing to pay millions for reliable exploit chains.

For those of us building software, this reinforces something I’ve been saying for years: security isn’t a feature you bolt on. It’s an architectural property. Memory-safe languages, minimal attack surfaces, principle of least privilege — these aren’t academic niceties. They’re defences against adversaries with essentially unlimited budgets.

What Developers Should Take Away

If you’re building applications that process untrusted input — and nearly all of us are — the Pegasus revelations should sharpen your thinking:

  1. Reduce parser complexity: Every format you support is attack surface. Do you really need to render that obscure image format client-side?

  2. Sandbox aggressively: Process untrusted data in isolated contexts with minimal permissions. Even if parsing is exploited, limit what an attacker can reach.

  3. Memory safety matters: A significant portion of zero-click exploits target memory corruption bugs. Languages like Rust eliminate entire classes of these vulnerabilities. The argument for memory-safe languages in security-critical code just got stronger.

  4. Assume compromise: Design systems where a single compromised component can’t exfiltrate everything. Compartmentalise data access.
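Point 4 can be sketched in a few lines of Rust. The names here are purely illustrative, but the shape is the point: the rendering component is handed a narrow, read-only view of one conversation rather than a handle to the whole message store, so even a fully compromised renderer can only read what its view exposes.

```rust
use std::collections::HashMap;

/// The full message store. Only trusted core code holds this.
struct MessageStore {
    conversations: HashMap<String, Vec<String>>,
}

/// The only capability a renderer is handed: read-only access
/// to a single conversation's messages.
struct ConversationView<'a> {
    messages: &'a [String],
}

impl MessageStore {
    fn view(&self, conversation_id: &str) -> Option<ConversationView<'_>> {
        self.conversations
            .get(conversation_id)
            .map(|m| ConversationView { messages: m })
    }
}

/// A (possibly compromised) rendering component. It never sees
/// the store, only the view it was given.
fn render(view: &ConversationView) -> String {
    view.messages.join("\n")
}

fn main() {
    let mut conversations = HashMap::new();
    conversations.insert(
        "alice".to_string(),
        vec!["hi".to_string(), "lunch?".to_string()],
    );
    conversations.insert("work".to_string(), vec!["quarterly report".to_string()]);
    let store = MessageStore { conversations };

    // The renderer is scoped to one conversation; "work" is unreachable.
    let view = store.view("alice").expect("conversation exists");
    println!("{}", render(&view));
}
```

In a real app the boundary would be a process or IPC boundary rather than a type, but the design discipline is the same: grant each component the minimum data it needs, so that one exploited parser cannot exfiltrate everything.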

My Take

I’ve been in this industry long enough to remember when “nation-state adversary” was considered an unrealistic threat model for most software. Pegasus demonstrates that the tools of nation-state surveillance have been productised and sold to dozens of governments worldwide. The adversary isn’t hypothetical anymore.

What frustrates me most is the asymmetry. NSO Group reportedly has hundreds of engineers working on exploitation. Most development teams I’ve worked with have maybe one part-time person thinking about security. The economics are wildly skewed in the attacker’s favour.

The silver lining, if there is one, is that this kind of exposure tends to accelerate defensive investment. Apple will undoubtedly harden iMessage further. Google will tighten Android’s messaging stack. But the fundamental tension remains: we build complex systems that process untrusted input, and sophisticated attackers will always find the cracks.

For those of us shipping software every day, the lesson is clear — every line of parsing code you write is a potential entry point. Treat it accordingly.
