What They Can See: The Actual Surveillance Landscape

The enforcement apparatus is real. We gain nothing by pretending it is not. But the conversation about surveillance in sovereignty-minded circles tends to oscillate between two equally useless poles: the conspiracist who believes every keystroke is monitored by a shadowy omniscient state, and the naive optimist who assumes that because they have nothing to hide, the infrastructure of observation is irrelevant to their life. Neither position survives contact with the evidence. What Edward Snowden documented in Permanent Record, what Shoshana Zuboff mapped in The Age of Surveillance Capitalism, and what publicly available government reports confirm is a landscape that is simultaneously more extensive than most people realize and less actionable than most privacy advocates fear.

This article is the foundation of the Enforcement Gap series. Before we can talk about what agencies do with information, we need to establish what information exists, who holds it, and under what legal framework it flows. The answer is specific, documented, and worth understanding with precision rather than panic.

The Financial Layer: What the IRS, FinCEN, and Banks Actually See

The financial surveillance infrastructure in the United States is built on a reporting architecture, not a monitoring architecture. The distinction matters. Banks and financial institutions are required by law to generate specific reports under specific conditions, and those reports flow to specific agencies. This is not passive surveillance — it is structured data collection triggered by defined thresholds.

Currency Transaction Reports are filed by banks for any cash transaction exceeding $10,000. This is not discretionary. It is automatic. If you deposit $12,000 in cash at a bank, a CTR is generated and sent to the Financial Crimes Enforcement Network, a bureau of the U.S. Treasury Department. The report includes your identity, the amount, and the nature of the transaction. FinCEN receives millions of these reports annually. The filing of a CTR does not indicate suspicion. It is a mechanical process.

Suspicious Activity Reports are different in kind. Banks and other financial institutions file SARs when they observe activity that appears unusual relative to a customer’s known profile — regardless of the dollar amount. A sudden series of $9,500 cash deposits from someone who has never made cash deposits before might trigger a SAR. A wire transfer to a jurisdiction flagged for money laundering risk might trigger one. SARs are discretionary, filed based on institutional judgment, and they are confidential — the institution is prohibited from telling you that a SAR has been filed.
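The difference between the mechanical CTR trigger and the judgment-based SAR can be sketched in a few lines of Python. The thresholds and the anomaly heuristic below are illustrative assumptions, not the actual rules banks apply:

```python
CTR_THRESHOLD = 10_000  # USD; filing is automatic for cash above this line

def file_ctr(cash_amount: float) -> bool:
    """CTR logic is mechanical: any cash transaction over $10,000 is reported."""
    return cash_amount > CTR_THRESHOLD

def flag_sar(amount: float, customer_history: list[float]) -> bool:
    """SAR logic is discretionary. This toy heuristic flags a deposit far
    outside a customer's historical pattern, regardless of the $10,000 line.
    Real institutions use their own (confidential) criteria."""
    if not customer_history:
        return amount > 5_000  # assumed caution for a customer with no history
    typical = sum(customer_history) / len(customer_history)
    return amount > 3 * typical  # assumed "unusual relative to profile" test

# A $12,000 cash deposit always generates a CTR...
assert file_ctr(12_000)
# ...while a $9,500 deposit from a customer who normally moves ~$200 can
# trigger a SAR even though it stays under the CTR threshold.
assert not file_ctr(9_500)
assert flag_sar(9_500, [180, 220, 150])
```

The point the sketch makes is structural: one report type is a pure threshold function, the other is a pattern judgment, and the two produce very different kinds of visibility.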

The 1099 reporting ecosystem is the broadest layer of financial visibility. Brokerages report investment income on 1099-B forms. Banks report interest on 1099-INT forms. Payment platforms like PayPal and Venmo report gross payments for goods and services on 1099-K forms. The American Rescue Plan Act lowered the 1099-K reporting threshold from $20,000 and 200 transactions toward $600 in a calendar year, a change the IRS has phased in over several years after repeated delays. The practical effect is the same: small-scale online sellers, freelancers, and gig workers now generate reports that flow to the IRS automatically. The IRS does not have to come looking for this income. It arrives in its systems as structured data, matched to your Social Security number.

The net effect is that the IRS has a remarkably complete picture of income that flows through institutions. What they see less clearly is cash-based economic activity, barter, and transactions that occur entirely between individuals without institutional intermediaries. This is not a loophole — unreported income is still legally taxable — but it is a structural fact about where visibility is high and where it is low.

The Digital Layer: What ISPs, Platforms, and Ad Networks Collect

The digital surveillance landscape is, in many ways, more extensive than the financial one, but it operates under a different logic. Zuboff’s The Age of Surveillance Capitalism documents how the dominant business model of the internet became the extraction of behavioral data — not for law enforcement, but for commercial prediction. The surveillance that most people experience daily is not governmental. It is corporate.

Your Internet Service Provider can see the domains you visit, even if the content of your communication is encrypted via HTTPS. They can see when you connect, how much data you transfer, and to which servers. Under the stored communications provisions of the Electronic Communications Privacy Act, ISPs can be compelled to provide this data to law enforcement with appropriate legal process — a subpoena, court order, or warrant, depending on the type of data and how recent it is.
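The domain-versus-content split can be made concrete with a toy model. This is only an illustration of which parts of an HTTPS request leave the TLS tunnel (via DNS lookups and the TLS SNI field, absent Encrypted Client Hello), not a packet-level implementation:

```python
from urllib.parse import urlsplit

def isp_view(url: str) -> dict:
    """Toy model of HTTPS metadata visibility. For an encrypted request, the
    ISP still learns the destination host, the port, and traffic volume and
    timing. The path, query string, headers, and body travel inside TLS."""
    parts = urlsplit(url)
    return {
        "visible_to_isp": {
            "host": parts.hostname,
            "port": parts.port or (443 if parts.scheme == "https" else 80),
        },
        "encrypted_in_transit": {
            "path": parts.path,
            "query": parts.query,
        },
    }

view = isp_view("https://example.com/search?q=private+topic")
assert view["visible_to_isp"]["host"] == "example.com"
assert view["encrypted_in_transit"]["query"] == "q=private+topic"
```

The asymmetry is the point: the carrier knows you visited example.com, and roughly when and how much, but not what you searched for there.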

The platforms — Google, Meta, Amazon, Apple — operate a layer above the ISP. They do not merely see your connection metadata. They see your searches, your messages (unless end-to-end encrypted), your location history, your purchase history, your social graph, your browsing patterns across the web via tracking pixels and advertising networks, and the behavioral inferences drawn from all of this data. Zuboff calls this “behavioral surplus” — data generated by your activity that exceeds what is needed to provide the service you are using, harvested and processed into predictions about your future behavior.

The volume of data collected by commercial platforms dwarfs anything government agencies collect directly. What makes it relevant to the enforcement question is the third-party doctrine and the growing practice of government agencies purchasing commercial data rather than collecting it through traditional surveillance channels.

The Third-Party Doctrine: The Fourth Amendment’s Blind Spot

The Fourth Amendment protects against unreasonable searches and seizures by the government. But the Supreme Court has held, in cases including Smith v. Maryland (1979) and United States v. Miller (1976), that information voluntarily shared with a third party — a bank, a phone company, an ISP — carries a reduced expectation of privacy. If you give your financial records to a bank, the reasoning goes, you have voluntarily disclosed them, and the government can obtain them from the bank without a warrant.

The 2018 Carpenter v. United States decision narrowed this doctrine somewhat, ruling that historical cell-site location information requires a warrant. But the broader principle remains largely intact: data held by companies about you is more accessible to government agencies than data held by you alone. This is the legal architecture that makes the commercial surveillance ecosystem relevant to enforcement. The data that Google, your bank, and your phone carrier hold about you is not behind the same constitutional wall as the papers in your desk drawer.

For the sovereignty-minded person, this means that the most consequential privacy decisions are not about what you tell the government directly. They are about what you tell companies, which becomes accessible to the government through legal process or, increasingly, through commercial data purchases.

The Physical Layer: Plates, Faces, and Cell Towers

Physical surveillance technology is deployed unevenly across the United States, and the gap between capability and deployment is significant. Automatic license plate readers are widespread in urban areas, operated by both law enforcement agencies and private companies that sell the data to law enforcement. Your vehicle’s location is recorded multiple times daily if you drive in a city with significant ALPR deployment. This data is typically retained for months or years, creating a retrospective location history.

Facial recognition technology is deployed by some law enforcement agencies, though its use varies dramatically by jurisdiction. Several cities have banned or restricted government use of facial recognition. The technology exists and functions with varying degrees of accuracy, but its deployment is far from universal and its use in real-time surveillance of public spaces remains more limited in the United States than in countries like China or the United Kingdom.

Cell tower data — the record of which towers your phone connects to — provides a rough location history for anyone carrying a mobile phone. The Carpenter decision requires a warrant for historical cell-site location information in criminal investigations, but the data itself is generated continuously and held by carriers as a matter of routine business operations.

What Snowden Actually Showed Us

Edward Snowden’s 2013 disclosures, documented in his memoir Permanent Record, revealed that the National Security Agency had built collection capabilities far exceeding what most Americans imagined. The NSA’s programs included bulk collection of phone metadata, the ability to query internet communications passing through major telecommunications nodes, and partnerships with technology companies that provided access to stored communications under the PRISM program.

What has changed since 2013 is worth noting honestly. The USA FREEDOM Act of 2015 ended the NSA’s bulk collection of domestic phone metadata. Technology companies expanded their use of end-to-end encryption, partly in response to the Snowden revelations. Public awareness increased, and some legal constraints tightened. The collection capabilities remain formidable, but the legal and political landscape around their use has shifted.

The more important lesson from Snowden, for purposes of this series, is structural rather than specific. The surveillance apparatus is designed for collection at scale. The bottleneck is not collection — it is analysis and action. The NSA can collect vast quantities of data. What it cannot do is have a human being look at all of it. The gap between what is collected and what is examined is enormous, and that gap is the beginning of the enforcement gap we will explore throughout this series.

Collection, Analysis, Action: The Three-Stage Funnel

This is the framework that makes the surveillance landscape intelligible rather than terrifying. Think of enforcement as a three-stage funnel. At the top, collection: the data that is gathered, reported, and stored. The collection stage is vast. Financial reports, digital metadata, commercial data purchases, physical surveillance systems — the volume of data entering the funnel is staggering.

The second stage is analysis: a human being, or increasingly an algorithm, examines collected data and identifies something that warrants further attention. This is where the funnel narrows dramatically. The IRS receives over 150 million individual tax returns annually. FinCEN receives millions of SARs and CTRs. The volume of data vastly exceeds the analytical capacity of any agency. Most collected data is never examined by a human being. Most flagged data is never acted upon. The analytical bottleneck is the single most important structural fact about enforcement in the United States.

The third stage is action: an agency decides to investigate, audit, prosecute, or otherwise act on what analysis has revealed. This is where the funnel narrows to a point. Out of more than 150 million individual tax returns, fewer than a million are audited in any given year. Out of millions of SARs, a fraction result in investigations. Out of investigations, a fraction result in enforcement actions. The conversion rate from collection to action is vanishingly small for any individual data point.
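The narrowing can be put in rough numbers. The figures below are order-of-magnitude assumptions chosen to be consistent with the public statistics discussed above, not official agency data:

```python
# Order-of-magnitude sketch of the collection -> analysis -> action funnel.
# All counts are illustrative assumptions, not official statistics.
returns_collected = 150_000_000   # individual returns the IRS receives yearly
returns_flagged   = 5_000_000     # assumed: flagged by automated screening
returns_audited   = 600_000       # assumed: consistent with "fewer than a million"

flag_rate  = returns_flagged / returns_collected
audit_rate = returns_audited / returns_collected

print(f"flagged: {flag_rate:.1%} of returns")   # ~3.3%
print(f"audited: {audit_rate:.2%} of returns")  # ~0.40%
```

Even with generous assumptions at the analysis stage, the path from a collected data point to an enforcement action is a fraction of a percent — which is the funnel's whole argument in two lines of arithmetic.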

This does not mean the apparatus is toothless. When the funnel does narrow to you — when something about your activity triggers analysis and then action — the consequences can be severe. The point is not that enforcement does not happen. The point is that it happens selectively, based on specific triggers, and the selection process is governed by resource constraints, institutional priorities, and statistical patterns rather than by omniscient monitoring of individual behavior.

What This Means for Your Sovereignty

The honest assessment of the surveillance landscape is this: the capability to observe is extensive. The capacity to analyze and act is finite. For the ordinary person practicing legal sovereignty — filing accurate tax returns, maintaining legitimate business structures, using standard privacy tools — the probability of being individually targeted by the enforcement apparatus is very low. Not zero. Low.

The surveillance economy, as Zuboff documents, is primarily commercial rather than governmental. The data that companies collect about you is used to sell you things and predict your behavior, not to enforce laws against you. The governmental enforcement apparatus relies on structured reporting (1099s, CTRs, SARs) and algorithmic flagging rather than on the kind of individualized monitoring that most sovereignty-curious people imagine.

This is not an invitation to ignore the landscape. It is an invitation to understand it proportionally, to take the privacy measures that are practical and legal, and to stop performing paranoia in areas where the enforcement apparatus is not looking. The rest of this series will map exactly where the apparatus looks, what triggers its attention, and how the rational sovereign allocates their compliance and privacy effort accordingly.


This article is part of the Enforcement Gap series at SovereignCML.

Related reading:
What They Bother to Look At: Attention as a Scarce Resource
The Math of Insignificance: Why You’re Not That Interesting
Financial Privacy: What’s Actually Private and What Isn’t
