Brown University student Amara Majeed awoke on a Thursday in late April to find that she had been falsely accused of perpetrating the Easter terror attacks in Sri Lanka. Her photo was disseminated internationally and she received death threats, even though she was at school in Providence, Rhode Island, during the attacks, not in Sri Lanka. It was later revealed that Sri Lankan police had used facial recognition software, which mistakenly identified Amara as one of the bombers.

Amara’s story is an extreme example of what can go wrong when government agencies use untested, often unreliable, and racially biased facial recognition algorithms behind closed doors, absent public debate and regulations. Despite the significant risks and the legal and regulatory vacuum, law enforcement agencies, corporations, and others have embraced facial recognition technology, meaning it is fast becoming more prevalent in public life.

Probably due to its extreme creep factor, government agencies and corporations don’t generally advertise when and how they deploy face surveillance. But the companies that manufacture the tools need to sell them, so it is easier to find information on the companies building and selling facial recognition technology than on those using it. I did some research to see what’s new in the space, and found that it’s a hot market, with gargantuan tech firms like Amazon competing against startups you’ve probably never heard of.

Some of those startups are working on products that are downright frightening. A company called TVision Insights, for example, is working on a product to track viewers watching television in their homes. Households that “opt in” are monitored by TVision software whenever the TV is on. TVision claims to be able to tell who in the house is watching the TV and whether that person is actually paying attention to the screen. The company boasts on its website that its “computation software can be easily integrated into the graphic processing unit of any web camera. Once installed, their AI technique tracks how many people are watching, their attention level, even their emotions, all in real time.” What does it mean to “opt in” to this surveillance? It’s unclear, as TVision’s website doesn’t get into details. It’s possible the “opt-in” takes place when someone buys a smart TV, turns it on, and quickly “agrees” to the Terms of Service in order to watch their football game or nightly news program.

Other companies are working on two-factor authentication using facial recognition technology. In these systems, a face scan replaces entering a code texted to you or inserting a card into a reader to validate your password. Users are at least aware that their faces are being scanned, but the technology could still cause problems. First of all, a user’s choice to engage with a system like this depends on where the technology is deployed. If businesses force their workers to use these systems, for example, workers are left with little choice. Workers may understandably not want to hand over their biometric information to their employer or a third party just to get a paycheck. Equally troubling is a product under development by a company called Voatz, which is building technology to “make voting safer.” Its system currently allows voters to identify themselves with biometric data, either a fingerprint or a face scan.

Facial recognition technology is also popular with video surveillance companies and makers of police technology. Among them is the Cambridge, Massachusetts-based firm Suspect Technologies. Suspect advertises an “AgentAI” platform to assist police officers. In documents obtained by the ACLU of Massachusetts from the Plymouth Police Department, the company’s CEO admitted his tool might only work about 30 percent of the time, but nonetheless wanted to test it on unsuspecting people minding their own business in public spaces.

Indeed, facial recognition failures in government do not only occur in developing countries like Sri Lanka. When the MTA deployed facial recognition technology in New York City earlier this year, the technology failed to correctly recognize a single face. In the UK, officials acknowledged their pilot test of facial recognition was a disaster, with a 96 percent failure rate.

Although the facial recognition and security industries tout retail as a promising market for their products, most retailers refuse to discuss whether they use facial recognition technology. In 2018, the ACLU asked the 20 largest retailers in the US whether they use facial recognition technology. Eighteen of the companies refused to answer, and only one said that it did not use any facial recognition technology in its stores.

Government agencies in the United States are subject to public records laws, giving organizations like the ACLU a window into their adoption of surveillance technologies like face recognition. Private companies, however, are not subject to open records laws, and most companies deploying facial recognition technology do not want the public to know whether they’re using the tool. There are a few exceptions.

Saks Fifth Avenue is known to use facial recognition in its stores. Live video feeds from stores are monitored at company headquarters in New York, and facial recognition is used to run customers’ faces against a private Saks database of “past shoplifters or known criminals.” It is unclear whether Saks stores in Massachusetts use this technology. Recently, Apple was accused in a lawsuit of misidentifying a teen as a shoplifter and having him arrested for a theft that occurred while he was attending his high school prom. The case is ongoing, but it presents another example of the harms of jumping to conclusions based on facial recognition software.

Retailers are interested in tracking their customers as well. Walgreens stores in Chicago are piloting a technology developed by the company Cooler Screens, which uses facial analysis software to determine the demographics of customers looking at the beverage refrigerators in its stores, including tracking their eye movements. Thanks to a pioneering biometric privacy law in Illinois, the Biometric Information Privacy Act (BIPA), Cooler Screens and Walgreens cannot use the technology to individually identify customers there, but that feature could be used in stores in other states. In the absence of laws regulating the use of facial recognition technology, companies are working to bring more of it into their stores. Walmart has filed a patent for a system to capture customers’ facial information while they wait to check out.

There are many important unanswered questions about the use of facial recognition technology by governments and corporations. Someone who mistakenly ends up tagged as a criminal in a private database will likely never know it. They could be harassed in stores, possibly around the country as retailers form information-sharing networks. And because of the secrecy surrounding even the existence of these systems, an innocent person has no way of appealing their inclusion in such a database. These scenarios are not mere hypotheticals; facial recognition misidentifications are all too common.

As companies and government agencies continue to adopt this technology in more contexts, ordinary people may have little recourse to escape facial capture besides democratic engagement. The most important thing we can do right now to stop the oncoming dystopia is pass laws that ensure our rights keep pace with technological advancement and protect us from flawed software that violates our privacy.

As a start, the ACLU of Massachusetts is advocating for a moratorium on the state government’s use of facial recognition technology. We need to press “pause” until the legislature has passed rigorous safeguards to protect our rights. Yesterday, San Francisco became the first city in the country to ban government use of face surveillance technology, and Somerville, Massachusetts, looks like it will soon do the same.

Technology brings many benefits, but it also comes with risks. The only way forward is to get government involved, to make sure our rights in this democracy are paramount—no matter what companies like Amazon have to say about it.

This blog post was written by ACLU of Massachusetts intern Alex Leblang. Originally published on Privacy SOS.