Microsoft Recall, One Year On: Still Broken, Still Shipping

One year ago this month, Microsoft shipped Windows Recall into the wild — the AI feature that photographs your screen every few seconds so you can “find anything you've ever done” on your Copilot+ PC. The relaunch was supposed to close the chapter with encrypted vaults, Windows Hello and VBS enclaves. Twelve months on, a fresh proof-of-concept exploit has bypassed its most-hyped defence, Microsoft has classified the issue as “not a vulnerability,” and fewer than 10% of Windows 11 PCs can even run the feature. This is what the first year of always-on AI memory actually looked like — and why marketers building AI workflows should take the lesson seriously.

One Year Since Recall Launched (April 2025 → April 2026)

Microsoft's April 2025 opt-in relaunch on Copilot+ PCs was meant to draw a line under the 2024 backlash. The feature that had been described as “a disaster” by cybersecurity researchers came back with a redesigned security stack: encrypted snapshots stored in a secure data vault, mandatory Windows Hello biometric authentication, and isolation inside a VBS enclave that Microsoft argued would make the screenshots effectively unreachable to malware.

GeekWire's one-year retrospective in April 2026 was polite but damning: Recall “still raises security red flags.” University of Pennsylvania's Office of Information Security went further, telling staff the feature poses “substantial and unacceptable security, legality and privacy challenges” and advising it not be used on university-managed devices. When an Ivy League CISO office publishes a blanket warning about a flagship consumer AI feature, you have a trust problem, not a PR problem.

The Latest Vulnerability: Hagenah's Windows Hello Exploit

Earlier this month, researcher Alexander Hagenah published a proof-of-concept showing that malware with sufficient local privileges can exploit the Windows Hello authentication layer to reach Recall's snapshot vault. In other words: the biometric lock Microsoft positioned as Recall's crown-jewel defence can be walked through by an attacker who has already compromised the user's session.

Microsoft's defenders will correctly point out that “attacker already has local privilege” is not a zero-click remote exploit. But that misses the threat model Recall creates. A device that photographs its own screen every few seconds is a device where any eventual local compromise — a malicious browser extension, a dodgy installer, a phished credential — becomes a time-travel key to every password, message and document the user has seen in months. The biometric lock is load-bearing in a way the threat model does not support, and the blast radius of an ordinary infection is not ordinary any more.

Microsoft's Defence: “Not a Vulnerability”

On 3 April 2026, Microsoft formally classified Hagenah's finding as “not a vulnerability,” arguing that safeguards like login timeouts and anti-hammering rate limits mean the exploit cannot be abused at scale. Coverage quickly settled on “working as intended” — which is exactly the phrase that keeps tripping the feature up.

“Working as intended” is a fine answer when the intent is universally trusted. It is a terrible answer when a meaningful share of your security community believes the intent itself — continuous silent screen capture of everything a user does — is the problem. Microsoft has spent a year answering the second-order question (is the vault secure enough?) while refusing to re-litigate the first-order one (should the vault exist?). Researchers have not moved on, and neither have enterprise buyers.

The <10% Adoption Problem

Here is the number that should worry Redmond more than any exploit: fewer than 10% of Windows 11 PCs can currently enable and run Recall. It is gated to Copilot+ hardware — NPUs, specific Snapdragon/Intel/AMD silicon generations, minimum RAM thresholds — which means the feature that Microsoft has burned a year of reputation capital on is invisible to more than nine out of ten of its own users.

Reporting in January 2026 suggested Microsoft is quietly rethinking the Copilot+ Recall push internally. Tech Brew's April piece went harder, asking bluntly whether “it's time to recall Windows 11” — a headline that would have been unthinkable in 2022. An AI feature that more than nine in ten users cannot run, and that the rest are being told by their security teams not to enable, is not a product. It is a cautionary case study.

OpenAI Chronicle: Will It Face the Same Scrutiny?

The reason Recall still matters in April 2026 is not Recall itself. It is that every major AI vendor is now building some flavour of always-on personal memory — and each one is walking straight into the arguments Microsoft has spent a year losing.

OpenAI Chronicle, covered by The Letter Two on 21 April 2026, “looks a lot like Microsoft's Recall” — a persistent, queryable memory layer that ingests what you do to help you find it later. The write-up's open question is whether Chronicle will face the same scrutiny. The honest answer is: yes, and probably faster. The BBC and EBU studies published this spring on AI assistants' flawed news delivery have already primed regulators and journalists to treat confident AI memory claims with suspicion. The next Recall-style controversy will not need to be explained from scratch.

The Pattern: Always-On AI Features vs User Trust

The Recall story is one instance of a pattern every product team building with AI is now walking into:

  1. Ship an always-on capability that feels magical in demos.
  2. Treat security as an add-on review rather than the product's foundation.
  3. Discover that users — especially enterprise and regulated-industry users — need auditability and control, not reassurance.
  4. Spend the next two years retrofitting trust into a product whose premise was “just trust us.”

The companies that avoid this are the ones that build the audit trail, the user-level off-switches, and the per-surface data controls into the v1. Not because regulators demand it, but because the customers they want demand it.

What This Means for Marketing Teams

If you are a marketer, you are now building AI into your workflow whether or not you meant to. Your content tools write drafts. Your analytics tools summarise performance. Your CRM suggests next actions. Your campaign planner drafts briefs. Each of those systems is, in a small way, a Recall — quietly remembering what you did, what your team did, what your customers did, and making decisions on top of it.

The lesson of Microsoft's year is not “don't use AI.” It is that always-on AI that can't show its working is a liability wearing a productivity costume. If your marketing stack can't answer “what data did you use, who touched it, what changed, and how do I turn any piece of it off,” you are one leaked screenshot from a trust incident of your own. The winning AI marketing stack in 2026 — the kind Anjin is built around — is the one where every action is inspectable, every input is scoped, and every output is attributable, by design, not by audit.

Anjin: The Marketing Operating System Built for Trust

This is the brief we wrote Anjin against.

Anjin is the Marketing Operating System — a single platform that runs your content, distribution, SEO, campaign planning and performance reporting end-to-end, powered by agents that understand your brand. It is deliberately the opposite of Recall: not an always-on recorder bolted onto a laptop, but an intentional, auditable workspace where every AI action is logged, every input is explicit, every output is attributable, and every agent runs inside guardrails you set.

What that looks like in practice:

  • Every piece of content names the brief, the inputs and the agent that produced it.
  • Every distribution decision is reviewable and revertible.
  • Every data source is opt-in, scoped to the campaign, and revocable by the user.
  • Every teammate sees what the system did on their behalf — no silent screenshots, no hidden context.
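To make the principles above concrete, here is a minimal sketch of what an auditable agent-action record could look like. All names and fields are hypothetical illustrations of the pattern, not Anjin's actual schema or API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of an auditable AI action record.
# Field names are illustrative only, not Anjin's real data model.
@dataclass
class AgentAction:
    agent: str              # which agent produced the output
    brief_id: str           # the brief the action was run against
    inputs: list[str]       # explicit, opt-in data sources, scoped to the campaign
    output_ref: str         # pointer to the artefact produced
    actor: str              # teammate on whose behalf the agent ran
    reverted: bool = False  # distribution decisions stay revertible
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def revoke_input(self, source: str) -> None:
        """Data sources are revocable by the user at any time."""
        if source in self.inputs:
            self.inputs.remove(source)

# A team-visible log answers "what did the system do on my behalf?"
log: list[AgentAction] = []
action = AgentAction(
    agent="content-drafter",
    brief_id="brief-042",
    inputs=["brand-voice-guide", "q2-campaign-data"],
    output_ref="draft/landing-page-v1",
    actor="alice@example.com",
)
log.append(action)

# The user withdraws one data source; the record reflects it immediately.
action.revoke_input("q2-campaign-data")
```

The point of the sketch is the shape, not the code: every output carries its brief, its inputs and its agent, and every input can be switched off without deleting the audit trail.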

Marketers don't need a magic memory that watches them work. They need an operating system that does the work and shows the receipts. That's the category we're building.

The £888 Lifetime License — Offer Closing Soon

Lifetime access to Anjin for a one-time payment of £888. Not a subscription. Not a seat. Not a trial. One payment, unlimited use, for as long as Anjin exists.

The average marketing team spends £888 in about three working days on tooling, freelancers and coordination software. You're buying the platform that replaces most of it — once.

This price will not be offered again once we close our early-access cohort.

Claim your £888 Anjin lifetime license →

Founders, agency owners and in-house marketers — this is how you run marketing at AI speed without the team, the burn, or another year of waiting.

Sources: GeekWire, TweakTown, The Outpost, Tech Brew, The Letter Two, NewsBytes, TechTarget, Microsoft Support
