What's Documented vs. What's Assumed

The sovereign individual calibrates. That is perhaps the single most important discipline in navigating the surveillance landscape: the refusal to treat every rumored capability as confirmed fact, and the equal refusal to dismiss documented capabilities as paranoia. Overestimating threats wastes resources — time, money, cognitive bandwidth, social capital. Underestimating them creates genuine vulnerability. The goal is neither comfort nor alarm. The goal is accuracy.

What follows is an inventory of surveillance capabilities, divided into what is documented by court filings, government disclosures, academic research, and investigative journalism — and what is widely assumed but lacks credible evidence. This inventory reflects the landscape as of early 2026. Surveillance capabilities change; the discipline of distinguishing documentation from assumption does not.

Documented: The NSA and Five Eyes Programs

Edward Snowden’s 2013 disclosures, published through The Guardian and The Washington Post, revealed the scope of the National Security Agency’s surveillance programs with a specificity that moved these capabilities from the realm of suspicion into the realm of documented fact. Snowden detailed these programs further in his 2019 memoir Permanent Record, and subsequent court rulings and congressional investigations have confirmed the essential accuracy of the disclosures.

The PRISM program, as documented in the Snowden disclosures and confirmed by the Director of National Intelligence’s subsequent acknowledgments, provided the NSA with direct access to data held by major technology companies — including Google, Microsoft, Apple, Facebook, and Yahoo — through legal orders issued under the Foreign Intelligence Surveillance Act. The upstream collection program, documented in the same disclosures, involved the NSA tapping fiber-optic cables to collect internet communications in transit. The metadata collection program, confirmed by both the Snowden documents and the subsequent ruling in ACLU v. Clapper (2015), involved the bulk collection of telephone metadata — who called whom, when, and for how long — from major carriers, covering essentially all domestic phone calls.

These are not theoretical capabilities. The NSA built them, deployed them, and operated them at scale. Federal courts have ruled on their legality. The USA FREEDOM Act of 2015 placed some constraints on the bulk metadata collection program, but the underlying technical capability and legal framework for targeted surveillance remain intact. The FISA Amendments Act, reauthorized multiple times — most recently with Section 702 reauthorization in 2024 — continues to authorize surveillance of non-US persons’ communications, which inevitably sweeps up communications involving US persons as well.

What remains less clear is the current operational scope of these programs. The Snowden disclosures are over a decade old. The NSA’s capabilities have presumably evolved; the specific contours of that evolution are classified. We know what was built. We can reasonably infer that the capabilities have expanded rather than contracted. But the specific programs operating today are not publicly documented in the same granular detail. Honest analysis acknowledges this gap.

Documented: Corporate Location Tracking

In 2018, an Associated Press investigation found that Google continued to collect and store location data on users who had explicitly turned off their “Location History” setting. The distinction Google drew — between “Location History” (a specific feature) and other forms of location data collection embedded in services like Google Maps and Google Search — was technically accurate and practically deceptive. The Arizona Attorney General’s subsequent lawsuit against Google, which resulted in an $85 million settlement in 2022, confirmed that Google’s location tracking practices went beyond what users understood or consented to.

This is documented corporate behavior, not speculation. Google’s own internal communications, surfaced during the Arizona litigation, showed that Google employees were aware that the location tracking settings were confusing to users and that the company continued to collect location data through multiple pathways even when users believed they had disabled it.

The practice is not unique to Google. Data brokers — companies like Babel Street, Venntel, and others identified in investigations by Senator Ron Wyden and the Electronic Frontier Foundation — purchase precise location data from smartphone apps and sell it to government agencies, private investigators, and corporate clients. The Wyden investigation documented cases in which commercially available location data was used to track individuals to specific buildings, including places of worship, medical facilities, and private residences. The data was available for purchase without a warrant, without judicial oversight, and without the knowledge of the individuals being tracked.

This is the surveillance capability that should concern most people more than the NSA’s programs do. Those programs, however expansive, are at least nominally subject to legal process and congressional oversight. The commercial location data market operates with essentially no regulatory constraint in the United States, and the data is available to anyone willing to pay for it.

Documented: Shadow Profiles and Off-Platform Tracking

During Mark Zuckerberg’s 2018 congressional testimony, Facebook confirmed what researchers had long suspected: the company collects data on individuals who do not have Facebook accounts. Facebook builds what have been called “shadow profiles” — records of non-users assembled from contact lists uploaded by existing users, from Facebook pixels embedded on third-party websites, and from data purchased from brokers. If your friend uploads their phone contacts and your number is in their phone, Facebook has your phone number associated with a shadow profile — regardless of whether you have ever created an account.
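The mechanism described above can be sketched in a few lines. This is an illustrative model only — Meta’s internal systems are not public, and every name and structure here is hypothetical:

```python
# Illustrative sketch of the shadow-profile mechanism described above.
# All names and data structures are hypothetical; Meta's actual internal
# systems are not publicly documented.

registered_users = {"+15550001": {"name": "Alice", "account": True}}
shadow_profiles = {}  # records on people who have never created an account


def ingest_contact_upload(uploaded_contacts):
    """When a user uploads their address book, every contact who is not
    already a registered user gets (or enriches) a shadow record."""
    for contact in uploaded_contacts:
        phone = contact["phone"]
        if phone in registered_users:
            continue  # already a known account holder
        record = shadow_profiles.setdefault(phone, {"names_seen": set()})
        record["names_seen"].add(contact["name"])


# Bob has never created an account, but two different friends upload him:
ingest_contact_upload([{"phone": "+15550002", "name": "Bob M."}])
ingest_contact_upload([{"phone": "+15550002", "name": "Bobby"}])

print(shadow_profiles["+15550002"]["names_seen"])
```

The point the sketch makes is the one in the paragraph: the person being profiled takes no action at all. The record accretes entirely from other people’s uploads.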

Meta’s off-Facebook activity tracking, documented through the company’s own transparency tools (introduced under regulatory pressure), reveals the extent of cross-site behavioral data collection. Websites and apps that integrate Facebook’s pixel or SDK transmit behavioral data — page visits, purchases, searches — back to Meta, where the data is associated with your profile and used to refine the prediction products sold to advertisers. An analysis of the off-Facebook activity tool typically reveals data transmissions from hundreds of websites and apps, many of which the user would not have associated with Facebook.
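Mechanically, a tracking pixel is nothing exotic: a third-party page triggers a tiny request to the platform’s servers, with the behavioral event riding along in the query string. The sketch below shows the general shape; the endpoint and parameter names are hypothetical, not Meta’s actual pixel format:

```python
# Illustrative sketch of how an embedded pixel reports off-platform activity.
# The endpoint and parameter names are invented for illustration; real pixel
# payloads are more elaborate and differ by platform.
from urllib.parse import urlencode


def build_pixel_url(event, page_url, browser_id):
    """A third-party site fires a tiny image/script request; the event data
    travels in the query string to the platform's collection endpoint."""
    params = {
        "ev": event,        # e.g. "PageView", "Purchase"
        "dl": page_url,     # the page the visitor is currently on
        "uid": browser_id,  # cookie/device identifier tying events to a profile
    }
    return "https://tracker.example/tr?" + urlencode(params)


url = build_pixel_url("Purchase", "https://shop.example/checkout", "abc123")
print(url)
```

Because the identifier persists across sites, every site embedding the pixel contributes events to the same profile — which is why the off-Facebook activity tool shows hundreds of sources.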

This is not assumption. This is documented in Meta’s own disclosures, confirmed in congressional testimony, and visible to any user who checks the off-Facebook activity tool in their account settings.

Documented: Data Broker Aggregation

The data broker industry operates at a scale that is difficult to comprehend without specific numbers. Databrokers.com, a registry maintained by privacy researchers, lists over four thousand data brokers operating in the United States alone. These companies aggregate data from public records, purchase histories, location data, browsing data, and dozens of other sources to build detailed profiles on hundreds of millions of individuals.
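The core operation — joining partial records from many sources on a shared identifier — is conceptually simple. A minimal sketch, with invented field names and a naive last-write-wins merge, shows both the aggregation and why errors persist (nothing in the pipeline corrects a bad field):

```python
# Illustrative sketch of multi-source profile aggregation. Source and field
# names are invented; real broker pipelines use far more identifiers
# (phone, address history, device IDs) for record linkage.

def aggregate(sources):
    """Merge partial records from many sources into one profile per person,
    keyed on a shared identifier (here, a lowercased email address)."""
    profiles = {}
    for source_name, records in sources.items():
        for rec in records:
            key = rec["email"].lower()
            profile = profiles.setdefault(key, {})
            for field, value in rec.items():
                if field != "email":
                    # Later sources simply overwrite earlier ones; there is
                    # no correction mechanism, so a wrong value persists
                    # until another source happens to overwrite it.
                    profile[field] = value
    return profiles


sources = {
    "public_records":  [{"email": "J.Doe@example.com", "address": "12 Elm St"}],
    "purchase_data":   [{"email": "j.doe@example.com", "recent_buy": "fitness tracker"}],
    "location_broker": [{"email": "j.doe@example.com", "frequent_place": "clinic"}],
}
print(aggregate(sources)["j.doe@example.com"])
```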

The EFF has documented cases in which data broker profiles included home addresses, phone numbers, email addresses, estimated income, political affiliation, religious affiliation, health conditions, and purchasing habits — all available for purchase without the knowledge or consent of the individual profiled. The Wyden investigation documented that some data brokers sell data to US government agencies, effectively allowing federal agencies to purchase surveillance data that they would otherwise need a warrant to collect.

The profiles are not always accurate; data brokers frequently attribute incorrect information to individuals, and the lack of any correction mechanism means errors persist indefinitely. But the scope of the aggregation is documented, the commercial availability of the data is documented, and the use of commercially available data by government agencies as a warrant workaround is documented.

Assumed but Unproven: Your Phone Is Listening to Your Conversations

This is the single most widely believed surveillance claim that lacks credible evidence. The experience is familiar: you mention a product in conversation, and within hours you see an advertisement for it on your phone. The conclusion feels obvious — your phone’s microphone must be recording your conversations and transmitting them to advertisers.

Multiple investigations, including work by security researchers at Northeastern University, have tested this hypothesis by placing phones near sustained audio conversations about specific products and monitoring all network traffic leaving the device. The studies consistently found no evidence that phones transmit ambient audio to remote servers for advertising purposes. The researchers did find that some apps transmit screenshots and screen recordings — a different and documented privacy concern — but not ambient audio capture for ad targeting.
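A back-of-envelope calculation shows why covert, continuous audio upload would be hard to hide from the kind of traffic monitoring these studies performed. The scenario below is illustrative (the listening duration is an assumption), but the codec bitrate is a standard published figure for low-bitrate speech encoding:

```python
# Back-of-envelope estimate of the network traffic that continuous ambient
# audio exfiltration would generate. The 16-hour listening window is an
# assumed scenario; 16 kbit/s is a typical low-bitrate speech-codec setting.

LISTENING_SECONDS = 16 * 3600   # assume 16 waking hours per day
SPEECH_CODEC_BPS = 16_000       # ~16 kbit/s compressed speech

daily_upload_mb = LISTENING_SECONDS * SPEECH_CODEC_BPS / 8 / 1_000_000
print(f"{daily_upload_mb:.0f} MB/day")  # roughly 115 MB/day of unexplained upload
```

Over 100 MB per day of unexplained outbound traffic, per device, across billions of devices, would be conspicuous in exactly the kind of network captures the researchers ran — and it was not there.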

The more likely explanation, as documented by Zuboff and others, is that behavioral prediction models are simply very good. The advertising ecosystem knows your browsing history, your location, your purchase history, the browsing history of people in your household and social network, and thousands of other data points. When it predicts your interest in a product before you consciously recognize that interest yourself, it can feel like mind-reading. When the prediction coincides with a conversation, it can feel like eavesdropping. But the prediction models do not need audio data; the behavioral surplus they already have is sufficient.

This distinction matters for proportional response. If you believe your phone is listening, your response is to cover the microphone or leave the phone in another room — measures that address a threat that does not appear to exist. If you understand that the actual mechanism is behavioral prediction from existing data streams, your response is to reduce the data streams — a more effective and sustainable strategy.

Assumed but Unproven: Incognito Mode Provides Meaningful Privacy

Most users assume that browser incognito or private browsing mode provides meaningful privacy. The assumption is broader than the reality. Incognito mode prevents the browser from saving your browsing history, cookies, and form data locally. It does not prevent your internet service provider from seeing which sites you visit. It does not prevent the websites you visit from logging your IP address. It does not prevent your employer from monitoring your traffic if you are on a corporate network.
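The ISP-visibility point follows from how HTTPS works: the hostname leaks to the network through the DNS lookup and the TLS Server Name Indication field, while the path and query travel inside the encrypted tunnel. A rough sketch of that split (simplified — it ignores encrypted DNS and Encrypted Client Hello, which are not yet universal):

```python
# Illustrative split of what an on-path observer (ISP, employer network) can
# see for an ordinary HTTPS request versus what stays encrypted. Simplified:
# assumes cleartext DNS and TLS SNI, the still-common default.
from urllib.parse import urlparse


def network_visible(url):
    """Return the parts of an HTTPS visit an on-path observer can read."""
    parts = urlparse(url)
    return {
        "visible": parts.hostname,  # exposed via DNS lookup and the SNI field
        "encrypted": parts.path + ("?" + parts.query if parts.query else ""),
    }


print(network_visible("https://clinic.example/appointments?id=42"))
```

Note what this means in practice: incognito or not, the observer learns that you visited the clinic’s site, just not which page.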

Google itself settled a $5 billion class-action lawsuit in 2024 over allegations that the company tracked users’ browsing activity even in Chrome’s Incognito mode, collecting data through Google Analytics, Google Ad Manager, and other tools embedded on third-party websites. The settlement terms require Google to delete billions of records of browsing data collected from Incognito mode users.

The extent to which your ISP actually uses your browsing data varies by jurisdiction and by carrier. In the United States, the FCC’s 2017 repeal of broadband privacy rules removed the requirement for ISPs to obtain explicit consent before selling customers’ browsing data. Whether your specific ISP actually does this depends on the carrier’s privacy policy, which is subject to change. The capability exists. The extent of its exercise is variable and often opaque.

The Calibration Principle

The discipline we are practicing here — distinguishing the documented from the assumed — is not an academic exercise. It is the foundation of proportional response. Every hour you spend defending against an assumed threat is an hour not spent defending against a documented one. Every dollar you spend on a Faraday bag for your phone is a dollar not spent migrating your email to a provider whose business model does not depend on reading it.

The documented threats are serious enough to warrant deliberate action. The commercial location data market, the behavioral surplus extraction economy, the data broker aggregation industry, the government’s use of commercially available data as a warrant workaround — these are proven, documented, and ongoing. You do not need to invent additional threats to justify building proportional defenses.

The sovereign individual builds those defenses on evidence, not on anxiety. The evidence, as it turns out, is more than sufficient.

A Note on Timeframes

Surveillance capabilities as documented in this article reflect the landscape as of early 2026. The Snowden disclosures describe programs built over a decade ago; the NSA’s current capabilities are presumably more extensive. Corporate data practices change with every product update, every regulatory action, and every quarterly earnings call. Data broker registries expand and contract. Court settlements redefine what is legal and what is merely tolerated.

The specific facts in this article will age. The practice of distinguishing documented capability from assumed capability will not. Revisit the evidence periodically. Adjust your response proportionally. That is the discipline; the inventory is just the current input.


This article is part of the Surveillance Capitalism & The Proportional Response series at SovereignCML.

Related reading: What Shoshana Zuboff Actually Said (And What She Didn’t), The Business Model Is the Problem (Not the Technology), The Enforcement Gap: Laws That Exist but Don’t Protect You
