Politics & Policy · April 18, 2026
Watched, Tracked, Sold: Surveillance, Data Collection, and the Quiet System Built Around You
By TRTSKCS Editorial
There is a strange contradiction at the center of modern life, and most people feel it even if they do not always know how to explain it. We live in a time when everything is supposedly more connected, more personalized, more efficient, more convenient, and more intelligent than ever before. Yet almost every convenience we enjoy now comes with an invisible trade that most people never fully agreed to. Behind nearly every app, every smart device, every loyalty card, every search engine, every website, every GPS route, every streaming recommendation, every social media feed, every digital payment, every smart television, every fitness tracker, every security camera, every voice assistant, and every free online service is a system designed to collect, study, store, predict, and profit from human behavior.
Surveillance and data collection are no longer separate ideas. They are now part of the same machine.
Most people still think of surveillance in the old-fashioned sense, as a person sitting in a dark room watching a camera feed, or a government agent listening to a phone call, or a detective following someone around in an unmarked car, but the reality is far bigger, far quieter, and far more sophisticated than that. Surveillance today often does not feel like surveillance at all. It feels like convenience. It feels like your phone already knowing where you work and where you live. It feels like your bank warning you about a suspicious purchase. It feels like your favorite streaming service knowing what show you will probably binge next. It feels like social media knowing exactly what kind of post will keep you scrolling for another hour. It feels like your maps app telling you where traffic is before you hit it. It feels like a smart watch reminding you to stand up, drink water, or sleep more.
The problem is that every one of those conveniences creates data, and once that data exists, somebody wants access to it.
For the average person, the sheer amount of information being collected every single day would be difficult to even imagine. Your phone tracks your location. Your browser tracks your searches. Websites track your clicks. Stores track your purchases. Your car tracks your driving habits. Smart televisions track what you watch. Smart speakers track what you say. Employers increasingly track keystrokes, productivity, and time spent on certain tasks. Insurance companies track how you drive, how often you exercise, and in some cases, how healthy they think you are likely to be. Banks track what you buy, where you buy it, and what your spending habits reveal about your lifestyle. Social media companies track what you like, what you linger on, what makes you angry, what makes you laugh, who you interact with, what you share, and even what you type before you decide not to post it.
By itself, one piece of data often means very little. One search does not define you. One purchase does not explain your life. One location ping does not tell much of a story. But when thousands of these data points are collected together over years, they can reveal things about a person that even their own friends and family may not know. Patterns emerge. Routines become obvious. Habits become measurable. Weaknesses become predictable. A company may be able to tell if someone is pregnant before they announce it. A bank may know if someone is struggling financially before they admit it. A social media platform may know if someone is depressed before they even realize it themselves.
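How little it takes to turn scattered data points into a profile can be shown with a deliberately simplified sketch. The coordinates, timestamps, and thresholds below are invented for illustration, and real profiling systems are vastly more sophisticated, but even this toy logic recovers "home" and "work" from nothing more than timestamped location pings:

```python
from collections import Counter
from datetime import datetime

# Hypothetical location pings: (ISO timestamp, rounded latitude, rounded longitude).
pings = [
    ("2026-03-02T02:00", 40.71, -74.01),  # overnight
    ("2026-03-02T10:00", 40.75, -73.99),  # daytime
    ("2026-03-03T03:00", 40.71, -74.01),
    ("2026-03-03T11:00", 40.75, -73.99),
    ("2026-03-04T01:00", 40.71, -74.01),
    ("2026-03-04T14:00", 40.75, -73.99),
]

def infer_home_and_work(pings):
    """Guess 'home' as the most common overnight location and
    'work' as the most common daytime location."""
    night, day = Counter(), Counter()
    for ts, lat, lon in pings:
        hour = datetime.fromisoformat(ts).hour
        bucket = night if hour < 6 or hour >= 22 else day
        bucket[(lat, lon)] += 1
    return night.most_common(1)[0][0], day.most_common(1)[0][0]

home, work = infer_home_and_work(pings)
```

No single ping in that list says anything meaningful; the pattern across all of them does, which is exactly the point.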
That is where surveillance becomes something much bigger than just observation. It becomes prediction.
The world has quietly moved from watching people to modeling them.
Governments do it for security, policing, immigration enforcement, intelligence gathering, and public safety. Companies do it for profit, advertising, risk analysis, personalization, and product development. Political campaigns do it to influence behavior. Employers do it to maximize efficiency. Insurance companies do it to price risk. Retailers do it to drive purchases. Social media companies do it to keep people engaged. Everybody wants the same thing in the end, which is insight into what people will do before they do it.
That is why metadata matters so much. People often think metadata is harmless because it is not necessarily the content of what was said, but metadata can reveal just as much, if not more. It may not tell someone exactly what was said in a text message, but it can reveal who was contacted, when, how often, from where, for how long, and in what pattern. Enough metadata can paint a picture of a person’s life with frightening accuracy.
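The inferential power of metadata can be made concrete with a toy example. Assume nothing but a list of (contact, timestamp) call records, with no message content at all; the records and names below are invented for illustration:

```python
from collections import Counter
from datetime import datetime

# Toy call-metadata records: (contact, ISO timestamp). No content, only metadata.
records = [
    ("alice", "2026-03-01T08:05:00"),
    ("alice", "2026-03-02T08:10:00"),
    ("doctor", "2026-03-02T14:30:00"),
    ("alice", "2026-03-03T08:02:00"),
    ("lawyer", "2026-03-05T09:00:00"),
    ("alice", "2026-03-04T08:07:00"),
]

def profile(records):
    """Summarize who is contacted most often, and at what typical hour."""
    freq = Counter(contact for contact, _ in records)
    hours = {}
    for contact, ts in records:
        hours.setdefault(contact, []).append(datetime.fromisoformat(ts).hour)
    typical_hour = {c: round(sum(h) / len(h)) for c, h in hours.items()}
    return freq, typical_hour

freq, typical_hour = profile(records)
```

Without reading a single word of any conversation, this sketch already reveals a daily morning contact and one-off calls to a doctor and a lawyer, which is precisely the kind of picture metadata paints.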
Then there are the data brokers, which are perhaps one of the least understood and most disturbing parts of this entire system. Most people have no idea how many companies exist solely to collect, package, analyze, and sell information about them. These companies buy and gather data from apps, websites, public records, financial transactions, credit bureaus, loyalty programs, mobile devices, social media platforms, and countless other sources, then turn around and sell that information to advertisers, marketers, employers, insurance companies, political groups, financial institutions, and in some cases, even government agencies.
That last part matters because one of the biggest criticisms of modern surveillance is that governments can sometimes bypass traditional warrant requirements simply by buying commercially available data from private brokers. In other words, instead of obtaining a warrant to access someone’s location history, search history, or app usage, an agency may be able to purchase it from a company that already collected it. That creates a dangerous loophole, because it means the protections people think they have under the Constitution may not always apply the way they assume.
The legal side of all of this is complicated because the United States still does not have one unified federal privacy law that covers everybody equally. Instead, there is a patchwork of federal laws, state laws, industry-specific regulations, court rulings, and legal gray areas that often leave people confused about what rights they actually have.
At the federal level, there are laws like the Privacy Act of 1974, which governs how federal agencies collect and use personal information. There is HIPAA, which covers medical records and health information. There is COPPA, which protects children under the age of thirteen online. The Fair Credit Reporting Act regulates how consumer credit information is collected and shared. The Gramm-Leach-Bliley Act governs how financial institutions handle customer data. The Electronic Communications Privacy Act and the Stored Communications Act deal with emails, phone calls, and electronic records. The federal Wiretap Act is increasingly being used in lawsuits involving website tracking tools, session replay software, and ad pixels that monitor user behavior.
Even older laws like the Video Privacy Protection Act have become newly relevant because courts are now using them to examine whether streaming services and websites improperly share viewing behavior with advertisers.
Then there is FISA, especially Section 702, which has become one of the biggest surveillance battles in Congress. Section 702 allows intelligence agencies to monitor communications involving foreign targets overseas without a warrant, but because Americans often communicate with people abroad, Americans can still get swept into those systems. Critics argue this creates a backdoor for warrantless searches of Americans’ emails, texts, calls, and internet activity. Supporters argue it is necessary for national security, terrorism prevention, and intelligence gathering. Congress continues to fight over whether Section 702 should be renewed, reformed, narrowed, or expanded, but one reality remains true: surveillance powers almost never shrink once they exist. They tend to expand.
Congress is also debating broader national privacy legislation that would give consumers stronger rights to know what information companies collect, demand deletion of personal information, opt out of targeted advertising and data sales, limit how companies use biometric information, and require more transparency around artificial intelligence and automated decision-making. The problem is that Congress has struggled for years to agree on a single national privacy standard.
Because Congress has failed to act comprehensively, states have started creating their own privacy laws instead.
California remains the strongest and most aggressive state when it comes to privacy rights. The California Consumer Privacy Act, along with the California Privacy Rights Act, gives consumers the right to know what data companies collect, request deletion of personal information, correct inaccurate information, opt out of the sale or sharing of data, and limit the use of sensitive information. California also has separate laws involving biometric information, data brokers, and wiretap-style privacy claims.
Illinois has one of the strongest biometric privacy laws in the country through the Biometric Information Privacy Act, commonly called BIPA. That law requires companies to get consent before collecting fingerprints, facial scans, voiceprints, or other biometric identifiers, and it has become the basis for major lawsuits against employers and technology companies.
By 2026, roughly twenty states now have broad privacy laws in effect, including California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Texas, Florida, Maryland, Minnesota, Montana, Oregon, Delaware, New Hampshire, New Jersey, Kentucky, Nebraska, and Rhode Island. Indiana, Kentucky, and Rhode Island all added new privacy laws that took effect in 2026. States are increasingly adding new protections around health data, reproductive health information, children’s privacy, biometric information, deepfakes, AI systems, automated decision-making, data brokers, and the collection of location information.
All fifty states now also have data breach notification laws, which generally require companies to tell consumers when sensitive information has been exposed or stolen.
The larger issue is that privacy law in America is becoming more fragmented, not less. Your rights may depend on where you live, what kind of data is being collected, what company is collecting it, and whether lawmakers can move fast enough to keep up with technology.
That is the part many people find difficult to accept. Once a government, a corporation, or a technology platform has access to a new form of data, it becomes incredibly difficult to convince them to stop using it. If a company can make more money with data, it will usually collect more data. If a government believes more surveillance can prevent crime or terrorism, it will usually push for broader authority. If people are willing to trade privacy for convenience, the market will continue moving in that direction.
So what can actually be done about it?
The honest answer is that surveillance is not going away. The idea that society can somehow return to a world without cameras, without smartphones, without online tracking, without biometric systems, without digital profiles, without location data, or without predictive algorithms is unrealistic. That world is already gone.
But that does not mean people are powerless.
What can be done is limiting who has access to data, how long they can keep it, what they can use it for, and whether they need permission before collecting it in the first place. Stronger privacy laws matter. Warrant requirements matter. Transparency matters. Giving people the right to know what is being collected about them matters. Giving people the right to delete their data matters. Limiting facial recognition in public spaces matters. Regulating data brokers matters. Forcing companies to explain how algorithms make decisions matters. Stronger cybersecurity matters because the more data that exists, the more valuable it becomes to hackers, scammers, criminals, and hostile governments.
There is also an individual side to this. People can use privacy-focused browsers, encrypted messaging apps, VPNs, password managers, two-factor authentication, ad blockers, privacy settings, and more selective app permissions. They can think more critically about what they post publicly, what they connect to the internet, what devices they bring into their homes, and how much convenience they are willing to trade for privacy.
But even those choices only go so far because the next phase of surveillance is going to be powered by artificial intelligence, and that is where everything becomes even bigger.
AI is going to play an enormous role in surveillance because AI is not just good at collecting data. It is good at making sense of it.
A human being can only process so much information. An AI system can process billions of pieces of information at once, identify patterns, detect anomalies, recognize faces, analyze speech, predict behavior, flag suspicious activity, track movement across multiple camera feeds, identify emotional changes in someone’s voice, determine risk scores, analyze financial transactions, monitor online activity, and create detailed profiles in real time.
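One of those capabilities, anomaly detection, is simple enough to sketch in a few lines. The spending figures below are invented, and production fraud systems use far richer models, but the core idea of flagging statistical outliers looks roughly like this:

```python
import statistics

# Hypothetical daily spending totals for one account.
amounts = [42.0, 38.5, 51.0, 45.0, 40.0, 39.0, 44.0, 980.0]

def flag_anomalies(values, threshold=3.0):
    """Flag any value more than `threshold` standard deviations away
    from the mean of the remaining values -- a crude automated
    'suspicious activity' detector."""
    flagged = []
    for i, v in enumerate(values):
        rest = values[:i] + values[i + 1:]
        mu = statistics.mean(rest)
        sigma = statistics.stdev(rest)
        if sigma > 0 and abs(v - mu) / sigma > threshold:
            flagged.append(v)
    return flagged

suspicious = flag_anomalies(amounts)
```

A human auditor would need to eyeball every account to catch that one outlier; a system like this runs the same check over millions of accounts continuously, which is what makes AI-driven surveillance a difference in kind, not just in speed.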
Facial recognition is already becoming more accurate. Voice recognition is improving rapidly. Predictive policing tools are becoming more common. AI systems can already monitor crowds, identify individuals in public, track social media posts, detect unusual behavior, and sort through massive amounts of surveillance footage faster than any human ever could. Employers may increasingly use AI to monitor worker productivity. Banks may use AI to determine lending risk. Insurance companies may use AI to determine premiums. Governments may use AI to monitor public behavior. Retailers may use AI to predict what people will buy before they even know they want it.
The deeper concern is that AI does not just watch what people do. It may eventually shape what people do.
If an algorithm decides what news you see, what products you are shown, what ads appear in front of you, what jobs you are recommended, what content gets amplified, what opinions you are exposed to, what credit offers you receive, what routes you drive, and what people you interact with online, then surveillance stops being passive and starts becoming something more powerful. It starts becoming influence.
That is why this conversation matters.
The future is not simply going to be about whether people are watched. They already are. The real battle is going to be over who controls the systems doing the watching, who profits from them, who gets protected by them, who gets targeted by them, and whether ordinary people are ever truly given a meaningful choice in the matter.
Because once surveillance becomes invisible, profitable, and automated, it no longer feels like something happening around you.
It becomes the environment itself.
There is also another argument that appears every single time this conversation comes up, and it is the idea that if someone is not doing anything wrong, then they should not care about privacy in the first place. On the surface, it sounds logical. If you are innocent, why would it matter if someone is watching?
The problem is that this way of thinking completely misunderstands what privacy actually is.
Privacy is not about hiding wrongdoing. Privacy is about having control over what parts of your life belong to you, when they are shared, who gets access to them, and how they are interpreted. Most people who claim they have nothing to hide would still never willingly hand a stranger their bank statements, text messages, search history, private photos, passwords, medical records, dating life, financial problems, browser history, location history, or every thought they have ever typed into a search bar. Not because they are criminals, but because privacy is tied to dignity, autonomy, reputation, freedom, and the simple idea that some parts of life should remain yours.
The deeper issue is that surveillance is rarely just about catching criminals. Data can be misunderstood. It can be taken out of context. It can be sold, leaked, hacked, or weaponized. A person may search for depression because they are worried about a loved one, yet an algorithm may decide they are mentally unstable. Someone may visit a hospital repeatedly because a family member is sick, while an insurance company may interpret that as a personal health risk. Someone may attend a protest, search for bankruptcy advice, visit a lawyer, research addiction recovery, or explore controversial political ideas, and suddenly those actions become part of a permanent profile that follows them for years.
The real question is not whether you are doing something wrong. The real question is who gets to decide what wrong even means.
History is full of examples where governments, employers, schools, communities, and corporations changed the definition of suspicious, dangerous, immoral, or unacceptable behavior. What is considered acceptable today may not be considered acceptable tomorrow. Beliefs that are normal today may become controversial later. If every action, search, message, and location is permanently recorded, people lose the ability to experiment, make mistakes, change, grow, or reinvent themselves.
Privacy also protects freedom of thought. People behave differently when they believe they are constantly being watched. They become more careful, more conformist, more fearful of saying the wrong thing, more afraid to challenge authority, and less willing to ask difficult questions or explore unpopular ideas. That is why surveillance becomes dangerous even when someone has done absolutely nothing wrong.
A world without privacy is not just a world where wrongdoing is easier to catch.
It is a world where everybody slowly begins acting like they are always being judged.