The Geopolitics of Surveillance: US, China, EU, and the Rest
Surveillance capitalism is not an American invention, but it was perfected on American infrastructure. Google, Meta, Amazon, Apple — the platforms that built the behavioral futures market operate from Silicon Valley, under US law, on US servers (and their global mirrors). When we discuss surveillance capitalism as though it were a single, universal system, we miss something important: different states and regions have built fundamentally different relationships between corporate surveillance, government access, and individual rights. Understanding this landscape is not academic. It determines which tools actually protect you, which jurisdictions offer meaningful legal shelter, and which “privacy” solutions are privacy theater wearing a foreign flag.
The sovereign individual thinks about jurisdiction the way a prudent investor thinks about currency exposure. You do not put everything in one basket, and you do not assume that the rules governing your data today will be the rules governing it tomorrow. Diversification is not paranoia. It is the same measured response we have advocated throughout this series, applied to the geopolitical layer most people never examine.
The American Model: Corporate-Led, Government-Adjacent
The United States has no comprehensive federal privacy law as of early 2026. It has a patchwork: COPPA for children, HIPAA for health data, FERPA for education records, the CCPA/CPRA for California residents, and a handful of state-level statutes that vary in scope and enforcement. What it does not have is the equivalent of GDPR — a unified framework that treats personal data as a category worthy of comprehensive protection.
The American surveillance model is corporate-led. The behavioral futures market was built by private companies, driven by advertising revenue, and enabled by a regulatory environment that historically treated data extraction as innovation rather than intrusion. The government’s role is not to build the surveillance infrastructure — it is to access the infrastructure that corporations already built. This access comes through legal process (subpoenas, court orders, national security letters) and, as Edward Snowden documented in 2013, sometimes through mechanisms that bypass meaningful judicial oversight entirely.
The PRISM program, disclosed by Snowden, demonstrated that the NSA had direct access to data held by major technology companies — Google, Microsoft, Yahoo, Facebook, Apple, and others. The companies disputed the characterization of “direct access,” but the underlying reality was confirmed: the US government can compel American technology companies to provide user data, and the legal framework governing that compulsion — particularly the Foreign Intelligence Surveillance Act and its secret court — operates with minimal public accountability. National security letters, which come with built-in gag orders preventing the recipient from disclosing the request, add another layer of opacity.
For the individual, this means that data stored on American platforms is accessible to the US government through established legal channels and, in national security contexts, through channels that the subject may never learn about. This is documented, not assumed. The practical question is whether this matters for your specific threat model — and for most readers, the honest answer is that corporate data extraction has a more immediate, daily impact than government surveillance. But the infrastructure exists, and pretending it does not would be dishonest.
The Chinese Model: State-Corporate Fusion
China’s approach to surveillance is structurally different from the American model, and Western media coverage of it is often simultaneously accurate on facts and misleading on scale. The Chinese government has built surveillance capabilities directly into the country’s technology infrastructure. Platforms like WeChat and Alipay are not merely corporate products that the government can access — they are designed with government access as an architectural feature, not an afterthought.
The social credit system, frequently cited in Western discussions as evidence of total surveillance, deserves more careful treatment than it usually gets. As of early 2026, the social credit system exists in various municipal and regional implementations, not as a single unified national score. Western media has often portrayed it as a comprehensive, all-seeing rating system that governs every aspect of Chinese life. The documented reality is more fragmented: multiple local systems with different criteria, uneven enforcement, and significant gaps in coverage. This does not make it benign. It means the honest description is “a developing surveillance infrastructure with real consequences in specific domains” rather than “an omniscient scoring system that controls 1.4 billion people.”
What is less disputed is the legal framework. Chinese law requires technology companies operating in China to provide data access to government authorities upon request, with no independent judicial review in the Western sense. Data localization requirements ensure that data generated by Chinese users stays within Chinese jurisdiction, accessible to Chinese authorities. For the individual outside China, the practical question is whether Chinese-origin platforms — TikTok being the most prominent — create similar exposure for non-Chinese users.
The TikTok Question
The TikTok debate, as of early 2026, sits at the intersection of documented data practices and theoretical government access. What is documented: TikTok collects extensive behavioral data on its users — content preferences, engagement patterns, device information, location data, and keystroke patterns within the app. This is not unique to TikTok; American social media platforms collect comparable data. What is also documented: TikTok’s parent company, ByteDance, is headquartered in Beijing and subject to Chinese data access laws.
What is assumed but not proven with public evidence: that the Chinese government has accessed or is accessing TikTok user data on non-Chinese users for intelligence purposes. The US government has asserted this risk in legislative and executive actions targeting TikTok. The distinction between “the legal mechanism for access exists” and “that access has been exercised to target Western users” is the gap that honest analysis must acknowledge. The risk is structural, not necessarily active — but structural risks are precisely the kind that sovereign individuals take seriously.
The proportional response to TikTok is the same proportional response we would apply to any platform whose data governance you cannot verify: use it if the value justifies the exposure, understand what behavioral surplus you are generating, and do not treat it as a secure communication channel. This applies equally to American platforms whose data practices are opaque in different ways.
The European Model: Strong Law, Weak Execution
The European Union has the strongest privacy regulatory framework in the world. GDPR, enacted in 2018, established principles that other jurisdictions have adopted or adapted: data minimization, purpose limitation, the right to erasure, the right to data portability, and requirements for explicit consent. On paper, GDPR is the closest any major jurisdiction has come to treating personal data as something individuals own rather than something corporations harvest.
In practice, GDPR enforcement is inconsistent. The regulation is enforced at the national level by Data Protection Authorities in each member state, and these authorities vary enormously in resources, technical capability, and political willingness to pursue large technology companies. Ireland, where most major American tech companies have their European headquarters, has been the primary regulator for those companies — and Ireland’s Data Protection Commission has faced persistent criticism for slow enforcement timelines and what critics describe as an accommodating posture toward the industry it regulates.
The headline fines have been large. Meta received a 1.2 billion euro fine in 2023 for transferring EU user data to the United States in violation of GDPR. That is a significant number in isolation. As a percentage of Meta’s quarterly revenue, it is a cost of business — a line item, not a deterrent. The behavioral futures market generates sufficient revenue that regulatory fines, even record-setting ones, are absorbed as operational expense rather than experienced as consequence.
For the individual, GDPR provides real tools — particularly the right to request your data and the right to request deletion. These tools work. They are slow, they require persistence, and the platforms that process these requests do so with varying degrees of completeness and good faith. But they exist, and Europeans have meaningfully more legal leverage over their data than Americans do. The gap between “legal leverage” and “practical protection” is the enforcement gap we described earlier in this series, manifest at the continental scale.
Russia, India, Brazil, and Data Localization
Several major countries have pursued data localization as a sovereignty strategy — requiring that data generated by their citizens be stored on servers within their borders. Russia’s data localization law, in effect since 2015, requires that personal data of Russian citizens be stored on Russian servers. India’s evolving data protection framework includes localization requirements for certain categories of data. Brazil’s LGPD (Lei Geral de Proteção de Dados) provides GDPR-like protections but with enforcement mechanisms that are still maturing.
Data localization serves two purposes that are in tension with each other. The first is genuine sovereignty: keeping data within a jurisdiction where domestic law applies and domestic courts have authority. The second is government access: ensuring that data is physically located where the government can compel its production. For authoritarian governments, data localization is a surveillance enablement strategy wearing sovereignty clothing. For democratic governments, it is an imperfect tool that addresses a real problem — the fact that data stored in another country is subject to that country’s access laws.
VPN Jurisdiction and the Five Eyes
This geopolitical landscape has direct practical implications for the privacy tools people use. Virtual Private Networks are the most common consumer privacy tool, and their jurisdiction matters more than their marketing. A VPN based in a Five Eyes country — the United States, United Kingdom, Canada, Australia, or New Zealand — operates under the intelligence-sharing framework that Snowden documented. This does not mean your VPN provider is handing your data to the NSA. It means the legal mechanisms for compelling that handover exist, and in national security contexts, they operate with limited transparency.
A VPN based in Switzerland, Panama, or Iceland operates under different legal frameworks — ones that generally provide stronger protections against compelled disclosure. This is not absolute protection. No jurisdiction is immune from pressure, and a VPN provider’s internal policies and technical architecture matter as much as their legal domicile. But jurisdiction is one layer of protection, and it is a layer worth considering when you are choosing where to route your traffic.
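The jurisdiction check described above is simple enough to sketch in a few lines. The following Python snippet is a hypothetical illustration — the provider names and domiciles are placeholders, not real audit data, and legal domicile is only one layer alongside a provider’s technical architecture and internal policies:

```python
# Five Eyes intelligence-sharing alliance members, as named in this article.
FIVE_EYES = {"United States", "United Kingdom", "Canada",
             "Australia", "New Zealand"}

def exposure_tier(domicile: str) -> str:
    """Rough tier based only on legal domicile.

    Ignores technical architecture and internal policy, which matter
    at least as much as where the company is incorporated.
    """
    return "five-eyes" if domicile in FIVE_EYES else "other"

# Hypothetical providers for illustration only.
providers = {
    "vpn-alpha": "United States",
    "vpn-beta": "Switzerland",
}

for name, domicile in providers.items():
    print(f"{name}: {domicile} -> {exposure_tier(domicile)}")
```

The point is not the code itself but the habit it encodes: before trusting a provider's marketing, check where it actually answers to a court.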
The Sovereign Takeaway
The geopolitical landscape of surveillance is not a reason for despair. It is a reason for diversification. The same principle that drives financial sovereignty — do not concentrate your assets in a single institution, a single currency, a single jurisdiction — applies to your digital life. Your email provider, your VPN, your cloud storage, your domain registrar — each of these lives in a jurisdiction, and that jurisdiction determines which laws protect your data and which governments can compel access to it.
You do not need to become an expert in international data protection law. You need to understand three things. First, where your data is stored determines which rules apply to it. Second, no single jurisdiction offers complete protection. Third, deliberate distribution of your digital infrastructure across jurisdictions — just as you would diversify financial assets — reduces your dependence on any single regulatory framework or government’s goodwill. That is proportional sovereignty applied to the map.
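The audit behind the third point can be done on paper, but a short sketch makes the exercise concrete. Everything in this Python snippet is a made-up placeholder — the services, the jurisdictions, and the 50 percent concentration threshold are illustrative assumptions, not recommendations:

```python
from collections import Counter

# Hypothetical inventory: each piece of digital infrastructure
# mapped to the jurisdiction it lives in.
services = {
    "email": "Germany",
    "vpn": "Switzerland",
    "cloud_storage": "United States",
    "domain_registrar": "United States",
}

# Count how many services depend on each jurisdiction.
by_jurisdiction = Counter(services.values())
heaviest, count = by_jurisdiction.most_common(1)[0]
concentration = count / len(services)

print(f"Heaviest exposure: {heaviest} ({concentration:.0%} of services)")
# An arbitrary illustrative threshold for flagging concentration risk.
if concentration > 0.5:
    print("Consider redistributing before a rule change forces you to.")
```

Run against your own inventory, the output tells you which single regulatory framework you depend on most — the digital equivalent of noticing that all your savings sit in one currency.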
This article is part of the Surveillance Capitalism & The Proportional Response series at SovereignCML.
Related reading: What’s Documented vs. What’s Assumed, The Enforcement Gap: Laws That Exist but Don’t Protect You, Your Surveillance Capitalism Response Plan