Two weeks ago, I wrote about Microsoft Build 2024 and flagged Windows Recall as a feature that raised serious security concerns. This week, those concerns were validated in dramatic fashion. Security researcher Kevin Beaumont published findings showing that Recall's database — which catalogs screenshots of everything you do on your PC — sat in plain text, accessible to any application running on the machine. Microsoft has now delayed the feature, pulling it from the upcoming Copilot+ PC launch and moving it to a Windows Insider preview instead.
This is a rare case where the security community’s pushback actually changed a major product launch timeline. And there are important lessons here for all of us building software.
What Went Wrong
The core issue was almost embarrassingly basic. Recall continuously captured screenshots, ran them through OCR and semantic analysis on the NPU, and stored the results in a local SQLite database. That database was supposed to be protected, but Beaumont demonstrated that it was stored in the user’s AppData folder in plain text. Any process running under the user’s context — including malware — could simply read the entire database without elevated privileges.
Let that sink in: a feature designed to create a complete, searchable record of everything you’ve ever viewed on your computer stored that data in a form that any malicious application could trivially exfiltrate. We’re talking about screenshots of banking sessions, password managers, private messages, medical records, confidential documents — all indexed and searchable, sitting in an unencrypted SQLite file.
Security researcher Alex Hagenah went a step further and released TotalRecall, a proof-of-concept tool that could extract and display the Recall database contents. The tool worked exactly as you’d expect — because there was essentially no security barrier to overcome.
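To make concrete just how low the bar was, here is a minimal Python sketch of the kind of query a tool like TotalRecall performs. The folder layout, table name, and column names below are illustrative assumptions drawn from public write-ups, not a verified schema:

```python
import glob
import os
import sqlite3

# Hypothetical location of the Recall store under the user's profile;
# the exact folder and schema are assumptions for illustration.
pattern = os.path.expandvars(r"%LOCALAPPDATA%\CoreAIPlatform.00\UKP\*\ukg.db")

for db_path in glob.glob(pattern):
    # No elevation, no decryption, no authentication required: any
    # process in the user's context can open the file directly.
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT WindowTitle, TimeStamp FROM WindowCapture LIMIT 10"
    ).fetchall()
    for title, ts in rows:
        print(ts, title)
    conn.close()
```

A dozen lines of standard-library code, no exploit required. That is the entire "attack."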
The Response
To Microsoft’s credit, the response was relatively swift. On June 7, they announced Recall would be pulled from the Copilot+ PC launch scheduled for June 18 and moved to the Windows Insider Program. They committed to several changes: Recall would be opt-in rather than enabled by default, Windows Hello biometric authentication would be required to access the timeline, and the database would be encrypted with keys tied to the device’s TPM.
These are all improvements that should have been in the original design. The fact that they weren’t suggests that the feature was rushed through development without adequate security review — likely driven by competitive pressure to show AI capabilities at Build.
Pavan Davuluri, Microsoft’s head of Windows, framed the delay as wanting to ensure a “trusted experience.” That’s the right instinct, but it raises the question: why wasn’t this the starting point?
The Deeper Problem: AI Feature Velocity vs. Security
This incident illustrates a tension that I think will define the next few years of software development. Companies are under enormous pressure to ship AI features quickly. The competitive landscape is moving at a pace I haven’t seen since the early days of the web. But AI features often handle sensitive data in new ways — and the security implications of those new data flows aren’t always obvious during the feature design phase.
Recall is a perfect case study. The feature concept makes sense: use AI to create a searchable memory of your computing activity. But the implementation requires creating what is essentially the most sensitive data store on any consumer device. That store needs to be treated with the same level of security architecture as a credential manager or disk encryption system — not as a regular application database.
I’ve been involved in security architecture reviews for decades, and the pattern is familiar. A product team builds an exciting feature, security review happens too late in the cycle (or not thoroughly enough), and the result ships with fundamental design flaws. The difference now is that AI features tend to aggregate and process data in ways that amplify the impact of any security failure.
Lessons for Developers
If you’re building applications that integrate AI features — and increasingly, most of us are — this incident offers concrete lessons:
Threat model your data stores early. Before you write a line of code, ask: what’s the worst thing that happens if this data is fully compromised? If the answer is “catastrophic,” design your security architecture first.
Default to encrypted, not plaintext. In 2024, there’s no excuse for storing sensitive data in unencrypted SQLite databases. Use platform encryption APIs, tie keys to hardware security modules (TPM, Secure Enclave), and require authentication for access.
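As a minimal sketch of what "encrypted by default" looks like in practice, the following uses Python's cryptography package to encrypt rows before they ever touch SQLite. In a real design the key would be generated and sealed by hardware (TPM on Windows, Secure Enclave on Apple platforms) and released only after user authentication; generating it in process here is purely illustrative:

```python
import sqlite3
from cryptography.fernet import Fernet

# Illustration only: in production this key is sealed by hardware
# (TPM / Secure Enclave) and released only after user authentication.
key = Fernet.generate_key()
cipher = Fernet(key)

conn = sqlite3.connect("recall_demo.db")
conn.execute("CREATE TABLE IF NOT EXISTS captures (ts TEXT, payload BLOB)")

ocr_text = "banking session, card ending in 1234"
conn.execute(
    "INSERT INTO captures VALUES (?, ?)",
    ("2024-06-07T12:00:00Z", cipher.encrypt(ocr_text.encode())),
)
conn.commit()

# Anything reading the raw file now sees only ciphertext; recovery
# requires the key, which never lives next to the database.
(ts, payload), = conn.execute("SELECT * FROM captures LIMIT 1").fetchall()
print(ts, cipher.decrypt(payload).decode())
```

An attacker who copies the database file now gets ciphertext, and the interesting attack surface shifts to the key-release path, which is where it belongs.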
Opt-in for sensitive features. Recall was originally going to be enabled by default. For any feature that captures, stores, or processes sensitive user data in new ways, the ethical and practical default is opt-in with clear disclosure.
Assume hostile local processes. If your application stores valuable data, assume that malware running under the same user context will try to read it. Design your access controls accordingly. Sandboxing, separate process isolation, and hardware-backed encryption are your friends.
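One subtlety worth spelling out: per-user protections like Windows DPAPI do not defend against this threat model. A short sketch using the pywin32 bindings shows why; DPAPI keeps other users out, but any process running as the same user can round-trip the data without a prompt:

```python
import win32crypt  # pywin32; Windows-only

secret = b"ocr text from a banking session"

# DPAPI encrypts with a key derived from the user's logon credentials.
blob = win32crypt.CryptProtectData(secret, "recall-demo", None, None, None, 0)

# But any process in the SAME user context (including malware) can
# call CryptUnprotectData and recover the plaintext, no prompt shown.
desc, plaintext = win32crypt.CryptUnprotectData(blob, None, None, None, 0)
assert plaintext == secret
```

This is why Microsoft's revised design gates key release behind Windows Hello rather than relying on user-context protections alone.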
Security review before launch announcements. The awkwardness of delaying a feature after a major keynote is nothing compared to the reputational damage of shipping a security vulnerability. Build security review into your launch timeline, not after it.
My Take
I’m actually somewhat optimistic about this outcome. Yes, the original Recall implementation was a security failure. But the fact that Microsoft responded to community feedback and delayed the launch — rather than shipping and patching later — suggests that the feedback mechanisms are working. This is how the industry should function: researchers identify problems, companies listen, and products improve before reaching consumers.
The broader lesson is that the AI gold rush doesn’t exempt anyone from security fundamentals. If anything, the novel data patterns created by AI features demand more rigorous security architecture, not less. Every developer building AI-powered features should be looking at the Recall incident as a case study in what not to do — and more importantly, as a reminder that security can’t be bolted on after the exciting demo is built.
Microsoft will eventually ship Recall, hopefully with the security architecture it should have had from day one. In the meantime, the rest of us have a useful reminder: no matter how impressive the AI capability, the fundamentals of data protection still apply.
