State of AI Surveillance in the United States
Thank you to our Sponsor: Morphic

The state of AI surveillance in the United States in early 2026 is defined by a fast expansion of capability paired with uneven rules and rising political conflict. AI has made surveillance cheaper, faster, and more scalable by automating tasks that used to require large teams, like scanning video, matching faces, reading license plates, and linking scattered records. At the same time, the United States still has no single comprehensive federal law that clearly governs when and how government agencies can deploy AI driven surveillance across everyday life. The result is a patchwork where oversight depends on which agency is acting, which state you are in, what data source is used, and whether the surveillance is framed as national security, criminal enforcement, border enforcement, or public safety.
What has changed is not only the technology. What has changed is the operational posture. Agencies are increasingly treating AI as an always on analytic layer that can sit on top of existing cameras, databases, and identity systems. This is why many civil liberties debates have shifted away from the question of whether surveillance exists, and toward how automated and continuous it has become.
Where AI surveillance shows up most today
Video surveillance plus analytics
The United States already has massive camera coverage across cities, stores, transportation hubs, and private buildings. AI makes this footage searchable at scale. Video analytics can detect objects, track movement across time, and support identity matching when combined with facial recognition.
Common capabilities include:
• Finding a person across many cameras by clothing, gait, or face
• Flagging behaviors like loitering or running, which can be subjective and context dependent
• Building timelines of movement through spaces
• Creating searchable archives where anyone becomes a query
The biggest practical shift is the move from passive recording to active triage. Instead of a human watching a wall of screens, the system produces alerts, scores, and candidate matches. That changes how surveillance is used day to day. It also changes the threshold for use. When the marginal cost of scanning footage is close to zero, scanning becomes routine rather than exceptional.
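The shift from passive recording to alert-driven triage can be sketched in a few lines. This is a hypothetical illustration, not any vendor's pipeline; the camera IDs, detection labels, and scores are invented assumptions:

```python
# Hypothetical sketch of threshold-based triage: detections above a score
# threshold become ranked alerts. All values here are invented for illustration.
DETECTIONS = [
    {"camera": "cam_03", "label": "person_running", "score": 0.91},
    {"camera": "cam_07", "label": "loitering",      "score": 0.55},
    {"camera": "cam_01", "label": "person_running", "score": 0.33},
]

def triage(detections, threshold=0.5):
    """Keep detections at or above the threshold, highest score first."""
    hits = [d for d in detections if d["score"] >= threshold]
    return sorted(hits, key=lambda d: d["score"], reverse=True)

alerts = triage(DETECTIONS)
# Lowering the threshold is nearly free, which is why scanning becomes routine.
```

The sketch also shows where the policy pressure sits: the threshold is a single tunable number, and nothing in the mechanism itself limits how low it can be set.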
Automated license plate readers and location tracking
Automated license plate reader systems capture plates and store time and location, which enables retroactive tracking. Many systems also integrate with watchlists, hotlists, and investigative platforms that can connect plate sightings to case files.
Key concerns include:
• Long retention periods that turn routine travel into a persistent record
• Sharing across agencies that expands the footprint beyond the original purpose
• Private sector involvement, where data can flow between commercial and government systems
• Weak or inconsistent rules about who can run searches and why
The core issue is that location history is intensely revealing. It can expose religious practice, medical visits, political meetings, intimate relationships, and patterns of life. AI makes the analysis of these trails faster, and it also makes it easier to infer identities even when a person is not explicitly named in a dataset.
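To see why retained plate sightings are so revealing, consider a minimal sketch of rebuilding a movement profile from stored records. The plate, timestamps, and location labels are invented for illustration; real ALPR systems vary widely:

```python
# Hypothetical sketch: retained ALPR sightings become a movement profile.
# All records below are invented; real deployments store far more detail.
from collections import defaultdict
from datetime import datetime

sightings = [
    ("ABC123", "2026-01-05T08:02", "clinic_district"),
    ("ABC123", "2026-01-05T18:40", "place_of_worship"),
    ("ABC123", "2026-01-12T08:05", "clinic_district"),
    ("ABC123", "2026-01-12T18:35", "place_of_worship"),
]

def movement_profile(records):
    """Group sightings by plate and sort by time to rebuild travel history."""
    by_plate = defaultdict(list)
    for plate, ts, loc in records:
        by_plate[plate].append((datetime.fromisoformat(ts), loc))
    return {plate: sorted(visits) for plate, visits in by_plate.items()}

profile = movement_profile(sightings)
# Recurring locations expose patterns of life, the core privacy concern.
recurring = {loc for _, loc in profile["ABC123"]}
```

Nothing in the sketch requires a name: the plate alone is a stable identifier, and the recurring locations do the rest of the inferential work.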
Facial recognition and biometric identification
Facial recognition remains one of the most visible flashpoints because it directly ties surveillance to identity. The legal environment remains fragmented, with different states and cities setting different limits and enforcement regimes.
Where it is often used or debated:
• Law enforcement investigations that compare photos or video stills to motor vehicle databases
• Security screening in venues, retail, and private buildings
• Airports and travel biometrics, where opt out rules and transparency can vary by program
• Identity verification workflows that blur the line between authentication and surveillance
The consumer facing debate often focuses on accuracy, and accuracy matters. But the deeper debate is about governance and scale. Even a high accuracy system can produce rights harms if it is deployed broadly, used without strict purpose limits, or integrated into systems that enable persistent tracking. And even a low error rate can become a large number of errors when the system is applied to millions of people.
Data fusion across many databases
A major acceleration point is not one sensor. It is linkage. AI makes it easier to connect records across separate systems, such as phone data, purchase histories, public records, social media, and location trails. Even when each data set is legal to access in isolation, the combined system can create a level of visibility that feels like continuous surveillance.
Two themes matter here. One is the data broker pathway, where agencies can purchase sensitive data rather than compel it through warrants. Another is the creation of unified investigative platforms that allow analysts to query across many sources in one place.
Common data fusion patterns include:
• Identity resolution that links slightly different names, addresses, and identifiers
• Network analysis that maps associations between people, devices, and locations
• Pattern detection that flags “unusual” behavior, often without a clear baseline
• Risk scoring that converts a messy set of signals into a single number
This is the point where AI surveillance starts to feel less like watching and more like profiling. The system is not only observing. It is categorizing, predicting, and triaging attention.
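The last pattern, risk scoring, can be made concrete with a toy example. The signal names and weights are assumptions invented for this sketch; real scoring systems are proprietary and far more complex:

```python
# Toy risk scorer: a weighted sum of binary signals clipped to [0, 1].
# Signal names and weights are invented for illustration only.
WEIGHTS = {
    "watchlist_hit": 0.5,
    "unusual_hours": 0.2,      # "unusual" relative to an often-undefined baseline
    "linked_to_flagged": 0.3,  # association via network analysis
}

def risk_score(signals: dict) -> float:
    """Collapse a messy set of signals into a single number."""
    score = sum(w for name, w in WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

# A person with no watchlist hit can still score as high as a direct hit,
# purely through association and a subjective "unusual" pattern.
by_association = risk_score({"unusual_hours": True, "linked_to_flagged": True})
```

The point of the sketch is the governance problem, not the math: once many signals collapse into one number, the assumptions behind the weights disappear from the analyst's view.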
Thank you to our Sponsor: EezyCollab

Border and immigration enforcement
Border and immigration contexts are often where surveillance tools are deployed early and aggressively, in part because of mission framing and operational demand. These programs may involve biometrics, watchlists, automated vetting, and large-scale data integration.
Typical uses include:
• Identity verification and screening
• Travel history and association analysis
• Location and movement tracking near borders or within enforcement operations
• Document analysis and automated case triage
This area is often a policy pressure cooker. Expansive authorities and urgent mission framing can lead to faster deployment, while due process and transparency concerns can lag behind.
The federal governance picture is getting more formal but remains uneven
There is a clear trend toward federal level AI governance requirements, inventories, and risk management practices. However, these do not automatically translate into strong limits on surveillance itself. Many governance frameworks focus on internal controls and reporting rather than hard restrictions on certain uses.
A useful way to think about federal governance today is that it is stronger on process than on prohibitions. Agencies are increasingly asked to classify AI systems by impact, document uses, and create oversight structures. Yet whether a system should exist at all in a given context is still a contested and often politically charged question.
The legal and political pressure points shaping 2026
National security surveillance rules
Foreign intelligence authorities remain one of the most important legal foundations for large scale data collection and analysis. The relevance to AI is straightforward. AI increases the power of analysis on collected data. Even if collection rules do not change, analytic scale changes what the system can do.
Key fault lines include:
• How searches involving US person data are conducted and audited
• Whether certain queries should require judicial approval
• How long data is retained and how broadly it is shared
• Whether oversight structures can keep pace with analytic capability
The fight over “mass domestic surveillance” as a red line
A major recent flashpoint shows how surveillance concerns are now appearing inside government contracting, not just legislation. Some AI companies have tried to put use restrictions into contracts, including restrictions on domestic mass surveillance and fully autonomous weapons. At the same time, parts of government have asserted that contractors should not impose limits beyond what the law requires, especially in defense contexts.
This matters because it reveals a deeper tension. Is the limit set by law and policy alone, or can a technology provider insist on additional restrictions as a condition of access? If this conflict continues, it could shape how future government AI procurement is structured, including which companies participate and what kinds of safeguards become standard.
State privacy and biometric laws
States continue advancing privacy and AI related laws that shape data access, retention, transparency, and consumer rights. These laws are not all focused on surveillance, but they can indirectly constrain surveillance by limiting how data is collected, sold, or repurposed.
State level activity matters because it can create de facto national standards. Large companies and agencies sometimes adopt the strictest state standard operationally rather than maintain many different regimes. Still, the result is uneven protection depending on where a person lives and what systems touch their data.
Public trust and surveillance fatigue
Public concern is rising not only about cameras but about automation that quietly scales monitoring. When people believe they are trackable everywhere, it changes behavior. It can chill lawful activity, reduce participation in civic life, and undermine trust in institutions. It also changes the politics around AI, policing, and platforms.
What is genuinely new because of AI
The newest shift is that surveillance is moving from review to prediction. Traditional surveillance was often forensic. Something happened, then footage was reviewed. AI enables real time alerts and risk scoring. That creates new risks.
Core AI driven risks include:
• Automation bias, where humans trust the system too much
• Hidden error rates that are hard to contest in the moment
• Feedback loops, where targeted surveillance generates more “hits” that justify more targeting
• Proxy discrimination, where seemingly neutral signals replicate biased outcomes
• Chilling effects, where people avoid lawful activities because they feel monitored
AI also makes surveillance more modular. If a policy bans one technique, another pathway can recreate similar power. Limiting facial recognition does not necessarily prevent tracking if location trails, license plate data, and video analytics can still link identity through other means.
Practical safeguards that matter most
If you want a clean framing, treat the state of US AI surveillance as a contest between scale and safeguards. The safeguards that materially change outcomes are operational controls that constrain how systems are used, not broad ethics language.
Controls that make a real difference:
• Strict purpose limitation, so tools cannot drift into broader monitoring
• Data minimization, collecting only what is necessary for the defined purpose
• Short retention by default, with logged, reviewable exceptions
• Access controls that limit who can run sensitive searches
• Audit logs that record who searched what, when, and for what stated reason
• Independent testing for false positives and disparate impact in real conditions
• Clear escalation and appeal paths for people flagged incorrectly
• Procurement standards that require transparency about models, error rates, and updates
• Separation of untrusted inputs from action systems, so alerts do not automatically trigger high impact actions
In practice, retention and auditing often matter more than almost anything else. If data is retained for years and searches are not rigorously logged and reviewed, capabilities will be used broadly over time even if the original intent was narrow.
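Several of these controls, purpose limitation, audit logging, and short default retention, can be combined in one small sketch. Everything here (the names, the 30-day window, the allowed purposes) is an illustrative assumption, not any agency's real implementation:

```python
# Minimal sketch of purpose-limited, audit-logged search with short default
# retention. All policy values below are invented for illustration.
from datetime import datetime, timedelta, timezone

AUDIT_LOG = []                                     # who searched what, when, and why
ALLOWED_PURPOSES = {"active_case", "court_order"}  # strict purpose limitation
RETENTION = timedelta(days=30)                     # short retention by default

def audited_search(user, query, purpose, records):
    """Refuse out-of-purpose searches, log every attempt, drop expired records."""
    entry = {"user": user, "query": query, "purpose": purpose,
             "at": datetime.now(timezone.utc),
             "allowed": purpose in ALLOWED_PURPOSES}
    AUDIT_LOG.append(entry)  # denials are logged too, so misuse is reviewable
    if not entry["allowed"]:
        raise PermissionError(f"purpose {purpose!r} not permitted")
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in records if r["seen_at"] >= cutoff and query in r["value"]]
```

Note that the denial is recorded before the exception is raised: logging attempted misuse, not just successful queries, is what makes the audit trail reviewable.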
What to watch next
Over the next year, the most important story is not whether AI surveillance grows. It likely will. The key story is what gets constrained, where, and by whom. Federal intelligence law debates, agency inventory regimes, state privacy enforcement, and high profile procurement battles are converging into a single question: who sets the limits when the tools can watch at scale?
A concise checklist of what to track:
• Whether laws and policies tighten rules on sensitive queries involving Americans
• Whether stronger limits emerge around the purchase of sensitive data
• Whether states expand restrictions on biometric surveillance and location retention
• Whether agency AI inventories become more detailed and more enforceable
• Whether contracts for government AI systems include enforceable bans on domestic mass surveillance, and how those bans are defined
• Whether independent audits become common, especially for high impact deployments
The state of US AI surveillance in 2026 is not one story. It is many stories that share a pattern. AI is turning surveillance into software, which means it can scale quickly and quietly. The policy response is catching up, but it is uneven. The outcome will depend on whether governance moves beyond principles into enforceable limits, transparent inventories, and real accountability for how these systems are deployed and used.
Looking to sponsor our Newsletter and Scoble’s X audience?
By sponsoring our newsletter, your company gains exposure to a curated group of AI-focused subscribers which is an audience already engaged in the latest developments and opportunities within the industry. This creates a cost-effective and impactful way to grow awareness, build trust, and position your brand as a leader in AI.
Sponsorship packages include:
• Dedicated ad placements in the Unaligned newsletter
• Product highlights shared with Scoble’s 500,000+ X followers
• Curated video features and exclusive content opportunities
• Flexible formats for creative brand storytelling
📩 Interested? Contact [email protected], @samlevin on X, +1-415-827-3870
Just Three Things
According to Scoble and Cronin, the top three relevant and recent happenings
OpenAI Lands $110B Mega Round as Amazon, Nvidia, and SoftBank Deepen Partnership
OpenAI announced a $110 billion funding round led by Amazon ($50B), Nvidia ($30B), and SoftBank ($30B), valuing the company at about $730 billion pre money. Alongside the financing, OpenAI and Amazon unveiled a multiyear strategic partnership that expands OpenAI’s AWS spend and makes AWS the exclusive third party cloud distribution provider for OpenAI’s enterprise platform Frontier, while OpenAI said its Microsoft partnership remains unchanged. The funding is framed as fuel for massive compute and infrastructure needs as competition from Google and Anthropic intensifies. CNBC
OpenAI Wins Pentagon Deal as Anthropic Is Banned by Trump
President Trump ordered federal agencies to stop using Anthropic’s AI and the Pentagon moved to label the company a national security supply chain risk after a dispute over Anthropic’s attempts to bar its models from being used for domestic mass surveillance or fully autonomous weapons. Within hours, OpenAI announced a Defense Department deal to deploy its AI on classified networks, saying its contract includes safeguards such as prohibitions on mass domestic surveillance and requiring human responsibility for the use of force, while Anthropic said it will challenge the government’s designation in court. NPR
AI Helps Mexico Identify the Missing Through Tattoos and Rebuilt Faces
AI tools are increasingly being used in Mexico to help families and authorities search for missing people by speeding up identification and cross referencing fragmented records. Projects include AI that matches and classifies tattoos on unidentified bodies, systems that extract key details and link cases across separate investigation files and databases, image restoration that reconstructs faces to make morgue photos less traumatic and more usable for identification, and age progression models that predict how missing children might look years later. Supporters stress these tools can improve speed and coordination, but they are not a substitute for broader institutional action and accountability in a country with a large and ongoing disappearance crisis. CNN
Scoble’s Top Five X Posts