
Apple and Google's Exposure Notification API — Privacy Engineering at Scale

Osmond van Hemert

Cybersecurity Landscape - This article is part of a series.

In any other year, Apple and Google collaborating on a shared API would be the biggest tech story of the decade. In 2020, it’s just another Wednesday. The two companies have jointly developed an Exposure Notification system — originally called “Contact Tracing” before a deliberate rename — that uses Bluetooth Low Energy to help public health authorities track potential COVID-19 exposure. The technical architecture they’ve chosen is genuinely interesting, and it has implications well beyond pandemic response.

How It Actually Works

The system operates in two phases. Phase one, available now as an API for public health authority apps, works like this: your phone periodically broadcasts a rotating Bluetooth identifier derived from a daily Temporary Exposure Key (TEK). Other phones in proximity record these identifiers along with signal strength and duration.
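The derivation can be sketched roughly as follows. This is a simplified illustration, not the spec's actual cryptography: the real protocol derives an intermediate key via HKDF and encrypts the interval number with AES-128, whereas HMAC-SHA256 stands in here; the `EN-RPI` label and function names are my own.

```python
import hashlib
import hmac
import os

EN_INTERVAL_SECONDS = 600  # identifiers rotate roughly every 10 minutes

def new_daily_tek() -> bytes:
    """Generate a fresh 16-byte Temporary Exposure Key for the day."""
    return os.urandom(16)

def rolling_identifier(tek: bytes, interval_number: int) -> bytes:
    """Derive the identifier broadcast during one 10-minute interval.

    Simplified stand-in for the spec's HKDF + AES construction: anyone
    holding the TEK can re-derive every identifier, but the identifiers
    themselves look like unrelated random bytes to an observer.
    """
    msg = b"EN-RPI" + interval_number.to_bytes(4, "little")
    return hmac.new(tek, msg, hashlib.sha256).digest()[:16]

tek = new_daily_tek()
# Identifiers change each interval but are all re-derivable from the TEK.
ids = [rolling_identifier(tek, i) for i in range(3)]
```

The key property is one-way linkage: publishing a TEK retroactively links that day's identifiers together, but until then no observer can connect two broadcasts from the same phone.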

If you test positive for COVID-19, you can choose to upload your TEKs to a central server. Every phone periodically downloads the published TEKs and checks them against its local log of observed identifiers. If there’s a match that meets certain risk parameters (close enough, long enough), the user gets a notification.

The critical design decision is what stays local and what goes to the server. Your phone’s Bluetooth observations — where you’ve been, who you’ve been near — never leave the device. The only data uploaded are the TEKs of confirmed positive cases, and these are effectively random numbers that reveal nothing about the person’s identity or location. The matching happens entirely on-device.
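The on-device matching step can be sketched like this (using the same simplified identifier derivation as above; interval numbering and function names are illustrative, not from the spec):

```python
import hashlib
import hmac

def rolling_identifier(tek: bytes, interval: int) -> bytes:
    # Simplified derivation; HMAC stands in for the spec's HKDF + AES.
    msg = b"EN-RPI" + interval.to_bytes(4, "little")
    return hmac.new(tek, msg, hashlib.sha256).digest()[:16]

def match_exposures(published_teks, observed_ids, intervals_per_day=144):
    """Re-derive every identifier each published TEK would have produced
    and intersect with the local observation log — entirely on-device.
    The server only ever sees the TEKs of confirmed positives."""
    observed = set(observed_ids)
    hits = []
    for tek, first_interval in published_teks:
        for i in range(intervals_per_day):
            interval = first_interval + i
            if rolling_identifier(tek, interval) in observed:
                hits.append((tek, interval))
    return hits

# Simulate: our phone logged one identifier that a later-positive
# case broadcast during interval 1007 of their day.
positive_tek = b"\x01" * 16
seen = [rolling_identifier(positive_tek, 1007)]
matches = match_exposures([(positive_tek, 1000)], seen)
```

Note what the server never learns: which phones downloaded the TEKs, which found a match, or where any contact occurred.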

This is a fundamentally different architecture from the centralized approaches being pushed by some governments. France’s StopCovid app, for instance, uses a centralized model where all contact events are uploaded to a government server. The UK initially went centralized too. The decentralized approach that Apple and Google have chosen — often called the DP-3T model, after the academic protocol it’s based on — keeps the sensitive data distributed.

The Bluetooth Problem

As someone who’s spent considerable time working with IoT devices and Bluetooth protocols, I have a healthy skepticism about BLE-based proximity detection. Bluetooth signal strength (RSSI) is a notoriously unreliable proxy for physical distance. Walls, pockets, bags, body orientation, phone model, case material — all of these affect signal propagation in ways that make precise distance estimation essentially impossible.

Apple and Google are using a combination of signal attenuation, duration thresholds, and configurable risk scoring to try to separate meaningful contacts from noise. The API exposes parameters that public health authorities can tune: minimum duration, signal strength thresholds, risk weighting based on days since exposure. But fundamentally, you’re trying to answer an epidemiological question with a physical-layer signal that wasn’t designed for it.
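A toy version of that tunable scoring might look like the following. The parameter names and the formula are illustrative of the kinds of knobs the API exposes, not the actual scoring algorithm:

```python
def risk_score(duration_minutes: float, attenuation_db: float,
               days_since_exposure: int,
               min_duration: float = 10.0,
               max_attenuation: float = 55.0,
               window_days: int = 14) -> float:
    """Hypothetical risk score combining duration, signal attenuation,
    and recency — the three dimensions health authorities can tune."""
    if duration_minutes < min_duration:
        return 0.0  # too brief to count as a meaningful contact
    if attenuation_db > max_attenuation:
        return 0.0  # signal too weak — probably not close proximity
    # Weight recent exposures more heavily, decaying to zero over the window.
    recency = max(0.0, 1.0 - days_since_exposure / window_days)
    return duration_minutes * recency
```

Even this toy version shows the fundamental tension: the thresholds are hard cutoffs over a noisy physical measurement, so a contact through a thin wall and a contact across a crowded room can score identically.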

The counter-argument is that perfect accuracy isn’t required. If the system catches 60-70% of genuine close contacts with an acceptable false positive rate, it’s still more effective than relying on human memory alone. Traditional contact tracing asks “who were you near in the last two weeks?” — a question most people can’t answer accurately even under ideal conditions. An automated system with imperfect accuracy may well outperform an interview-based system with imperfect recall.

The Privacy Architecture

What impresses me most about this project is how thoroughly privacy has been baked into the architecture. This isn’t privacy as an afterthought or a policy promise — it’s privacy as a technical constraint.

The Temporary Exposure Keys rotate daily. The Bluetooth identifiers derived from them rotate every 10-20 minutes. There’s no persistent identifier that could be used to track a device across time. The server never learns who was exposed — only the device knows, and the notification happens locally. Even the TEKs uploaded by positive cases are stripped of metadata; the server doesn’t know which TEKs belong to the same person across days.

Apple and Google have also made explicit commitments about the lifecycle: the system will be disabled region by region when it’s no longer needed, and the Bluetooth broadcasting can be turned off by the user at any time. Whether you trust those commitments is a separate question, but the technical architecture genuinely limits what any party — including Apple and Google themselves — can extract from the system.

This matters because the failure mode of privacy-invasive contact tracing is severe. If people don’t trust the system, they won’t install the app, and a contact tracing app with 10% adoption is approximately useless. The privacy-preserving design isn’t just ethically right — it’s pragmatically necessary.

The Platform Power Question

There’s a less comfortable aspect to this story. Apple and Google control the two mobile operating systems that cover effectively 100% of the smartphone market. By building this at the OS level, they’ve made a unilateral decision about how contact tracing should work on mobile devices. Governments that wanted centralized approaches are now facing the reality that their apps will work poorly without OS-level access to Bluetooth — access that Apple in particular has historically restricted for battery and privacy reasons.

This is an extraordinary exercise of platform power. It’s being used in this case for a purpose most people would consider legitimate, and the privacy-preserving design is arguably better than what most governments would have built on their own. But it sets a precedent. Two companies have effectively overridden national public health technology strategies because they control the platforms.

My Take

I’m cautiously optimistic about the Exposure Notification API. The privacy architecture is sound — I’ve read the cryptographic specification, and it’s well-designed. The Bluetooth accuracy concerns are real but probably acceptable for a supplementary tool. And the decentralized approach is clearly the right call from both an ethical and adoption standpoint.

What concerns me is the adoption question. For this to work, a substantial percentage of the population needs to install and use a compatible app. The studies I’ve seen suggest you need 60%+ adoption for meaningful impact, though lower adoption rates can still provide some benefit. In my experience with IoT deployments, getting people to consistently use Bluetooth-based features is harder than it sounds — between battery concerns, Bluetooth confusion, and general app fatigue.
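The adoption math is unforgiving because detection requires the app on both phones. Assuming contacts pair independently of app ownership (a simplification), coverage of contact pairs falls off quadratically with adoption:

```python
def pair_coverage(adoption_rate: float) -> float:
    """Fraction of contact pairs where BOTH people run the app.
    Detection needs the app on both phones, so coverage is quadratic
    in the adoption rate (assuming independent, uniform adoption)."""
    return adoption_rate ** 2

for rate in (0.1, 0.3, 0.6):
    print(f"{rate:.0%} adoption -> {pair_coverage(rate):.0%} of contact pairs covered")
```

This is why 10% adoption is "approximately useless" (about 1% of contact pairs covered) while 60% adoption covers a meaningful 36%.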

We’re in uncharted territory — two platform rivals collaborating on critical public health infrastructure under immense time pressure. As engineers, all we can do is evaluate the architecture on its merits, push for transparency, and hope the implementation lives up to the specification.
