Blogs

“Those who have no record of what their forebears have accomplished lose the inspiration which comes from the teaching of biography and history.” 
― Carter G. Woodson 

Historian Carter G. Woodson’s words seem prophetic today, as a vocal minority seeks to bury the truth of our history and quash the inspiration needed to persevere in such dire times.

Woodson, often called the “Father of Black History,” created the precursor to Black History Month when he launched the celebration of Negro History Week in 1926. In the 1960s, Black History Month came to be recognized by cities, towns, universities, and schools around the country, and every president since 1976 has honored the celebration. For so many, February is about recognizing the accomplishments and contributions of Black Americans while planning for a future where equality and civil rights are inherent parts of our everyday lives. We are still working toward that future.

The ACLU of Massachusetts’ Racial Justice Program is continuing its work to dismantle systems of inequality in our schools, our health care, and our democracy, and to fight those who seek to violate the civil liberties of the marginalized.

We work against book bans in our schools that seek to silence BIPOC and LGBTQ+ voices, thus violating the rights of all students to learn. We work in coalition with our partners to eliminate health disparities in Black maternal health outcomes. We fight to end racist policing and to eliminate racial inequities in every stage of our criminal legal system. And our BIPOC to the Ballot Box initiative gives voice to the frustrations of BIPOC voters in communities with large and growing BIPOC populations but little BIPOC elected representation.   

Black History Month is a time to recognize that our commitment to equity extends beyond a single month and beyond a single issue area. From voting rights to free expression to policing to reproductive justice, we fight the battle against systemic inequities every day. During this month dedicated to hearing Black voices, we hope that the stories told, the awareness gained, and the discussions ignited serve to fortify us all for the coming fights.  

Let us remain steadfast in our dedication to upholding the rights of all people. Now more than ever, we must stand up and demand equality; we must honor and celebrate our differences. We must be resolute in our ongoing commitment to creating a Commonwealth where equity becomes the norm.

“I am ready to act, if I can find brave men to help me.”
― Carter G. Woodson 

Date

Thursday, February 1, 2024 - 2:30am

Related issues

Racial Justice, Voting Rights, Free Speech and Expression


By Carol Rose 
 
These days, it’s hard to distinguish hype from reality when it comes to GPT-4 and other breakthroughs in large language models (LLMs) that the tech companies are releasing into the world.  
 
What, if anything, makes these systems different from the algorithms that already shape our daily lives? 
 
Think of GPT-4 and other LLMs as language predictors—algorithms programmed to compute the most likely next word in a sentence. GPT-4 “learned” how to make these predictions by ingesting huge quantities of text scraped off the internet. Because the data sets used to train these algorithms are unfathomably large, GPT-4 can converse in ways that seem eerily human. That’s not surprising given that everything it has learned about which words are likely to follow other words comes from us.  
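
To make this concrete, here is a minimal sketch of next-word prediction in Python. GPT-4 itself is proprietary, so the example uses GPT-2, a small, openly available predecessor, through the Hugging Face transformers library; the prompt is purely illustrative.

```python
# A minimal sketch of next-word prediction, using the small, openly
# available GPT-2 model as a stand-in (GPT-4 itself is proprietary).
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The First Amendment protects freedom of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# The scores at the last position become a probability distribution
# over possible next words once softmax is applied.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {float(prob):.3f}")
```

The model has no notion of what the First Amendment is; it simply ranks tokens by how often similar word sequences appeared in its training data.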
 
GPT-4’s ability to mimic human language patterns makes it useful for many things, creating potential breakthroughs across our society, notably in medicine and science. It’s also good for party tricks: it can, for example, write an omelet recipe in the form of a Shakespearean sonnet in mere seconds. This fluency across a wide range of English-language expression makes GPT-4 almost creepily good at predicting and parroting human language. But unlike human beings, GPT-4 does not understand words or even concepts. It has no intent. GPT-4 may seem human, but it is not. And therein lies the danger.

Mindless machines that can believably mimic humans create the risk that people will believe whatever a machine says is true. We already know that algorithms trained on big data tend to augment systemic unfairness and bias, recycling and even magnifying past injustice with new technology. But GPT-4 doesn’t just reflect human bias. It also makes things up, hallucinating falsehoods seemingly at random. Indeed, one AI expert calls it a “bullshit machine.”
 
But so what? We already know there’s a lot of misinformation online. What’s the worst that could happen? 
 
As far as our democracy is concerned, the primary dangers arise in three areas: Truth, Extraction, and Control.  
 
First, truth. If people come to trust and rely on the machine, and the machine cannot tell truth from fiction, then we are moving into a post-epistemic or post-truth world.
 
According to OpenAI, the company that released it, GPT-4 will give existing AI systems even greater power to “reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.” Again, this is according to the people responsible for developing and unleashing this technology onto the world.
 
It also opens the way for scammers to deceive and manipulate others for power and profit. According to OpenAI, “absent safety mitigations, GPT-4 is able to give detailed guidance on how to conduct harmful or illegal activities.”
 
Bad actors may also attempt to weaponize these powerful algorithms to take down our democracy through mass manipulation, whether by overwhelming policymakers with human-seeming calls to action or by misleading voters.
 
The ability to program chatbots to fool massive numbers of people into thinking and acting on “alternative facts,” or to perpetuate “big lies,” has implications for democracy. Attempts to manipulate and even overturn elections are not new, but GPT-4 makes it exponentially easier to manipulate and fool far more people far more often: a scary prospect with more than 50 elections worldwide in 2024, including the U.S. presidential election.
 
According to New York State Attorney General Letitia James, the broadband industry funded six companies to submit fake comments and letters to lawmakers opposing net neutrality rules. Her investigation found that an astonishing 18 million fake comments were filed with the FCC, and half a million fake letters were sent to members of Congress. This fraud was discovered in part because the scheme was executed poorly. As internet security researcher Bruce Schneier observes, tools like GPT-4 will make it trivial for bad actors to pollute our democratic process with far more convincing fake comments and letters, which will in turn be much harder to detect as fraudulent.
 
Another area of concern lies in the fundamentally extractive nature of LLMs. These algorithms owe everything they can do to the hard work of human beings all over the world. After all, they are trained on data extracted from us. But only the super-rich profit from it. The artists, musicians, writers, poets, coders, doctors, and lawyers—to name a few—will find it increasingly hard to get paid when their work is “transformed” by AI without remuneration. 
 
GPT-4 also runs the risk of exacerbating racial and socio-economic disparities, while deepening the digital divide between those who can access the benefits of the technology and those who cannot. Given the extraordinary positive potential of GPT-4, notably in areas such as medicine, what steps do we need to take to ensure that everyone benefits equally from this new technology?
 
Finally, we need to consider ways in which GPT-4 and other LLMs can be deployed as tools of control. Black and brown people, immigrants, LGBTQ people, women, and people with low incomes already disproportionately feel the negative impact of bias in algorithms that are used by law enforcement and government agencies in deciding where to deploy police, whose liberty to take away, whom to hire, and how to provide and deny social services. Ensuring that powerful actors don’t rely on biased, false, but convincing algorithms to make decisions that affect people and families will be key if we are to ensure due process and equality under the law.  
 
Now and for the foreseeable future, it is up to us, including organizations like the ACLU of Massachusetts, to ensure law and policy keep pace with developments in machine learning technologies. Technology is powerful, but so is the law. And while only a handful of technology companies control the algorithms, we the people control the government. It is up to us and our forward-thinking, public-interest-minded lawmakers to craft and implement regulations ensuring that the application of these systems does not undermine due process or equal rights. Our laws must prioritize the welfare of humans, animals, and our planet over the private profits of a few large companies. Ultimately, we must ensure that everyone shares in the upside of these technologies, while minimizing the potential harms.
 
Now is the time to understand, demystify, and regulate these systems, or run the risk that they will be used to take down our democracy or worse. Doing so won’t be easy, but it’s essential work that will shape our country and the world for generations to come.  

Date

Tuesday, March 28, 2023 - 8:15am


Automatic license plate reader technology is unregulated in Massachusetts—and it implicates civil liberties.

What is an automatic license plate reader? 

Automatic license plate readers (ALPRs) enable private companies and government agencies to keep track of where people drive and when. The technology uses special cameras that are either mounted in stationary locations like traffic lights or affixed to movable objects like cars and trucks. These cameras capture images of license plates and convert them to text files using optical character recognition technology. The original image, text file, and associated metadata such as date, time, and geographic coordinates are then added to a database. These databases are used by private industries like insurance, towing, and repossession to locate cars and perform investigations. They are also used by police departments to conduct dragnet surveillance of motorists.
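
To make the data flow concrete, here is a hypothetical sketch, in Python, of the kind of record an ALPR system stores for each passing car. The field names and values are purely illustrative, not any real vendor’s schema.

```python
# A hypothetical sketch of the record an ALPR system might store for each
# passing car. Field names and values are illustrative, not a vendor schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PlateRead:
    plate_text: str        # output of optical character recognition
    image_path: str        # the original camera image
    captured_at: datetime  # when the camera saw the car
    latitude: float        # where the camera saw the car
    longitude: float
    camera_id: str         # which stationary or mobile camera made the read

read = PlateRead(
    plate_text="4ABC123",
    image_path="/alpr/captures/0001.jpg",
    captured_at=datetime.now(timezone.utc),
    latitude=42.3601,
    longitude=-71.0589,
    camera_id="traffic-light-17",
)
print(read)
```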

How do police use license plate readers? 

Police primarily use ALPRs in two ways: to track cars in real time and to track cars’ past locations. When police seek to find a particular car in real time, they can add information about that car to an ALPR list maintained by their department or another agency. These lists, sometimes called “vehicle of interest” lists or “hot lists,” are generated, aggregated, and shared by agencies like local police departments and the FBI’s Criminal Justice Information Services Division. The lists can include stolen cars, cars associated with Amber Alerts, and other vehicles of interest to police. When a car on one of these lists passes by a license plate reader, the system generates an alert, also known as a “hit.” Depending on how an ALPR system is configured, a hit notification can be sent to an individual police officer or an entire department for follow-up action.
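
In code terms, the hot-list check is a simple lookup. This hypothetical sketch, with made-up plate numbers and locations, shows each read being compared against a set of flagged plates, with a match producing a “hit”:

```python
# A hypothetical sketch of hot-list matching: each plate read is checked
# against a set of flagged plates, and a match produces a "hit" alert.
hot_list = {"4ABC123", "7XYZ999"}  # e.g. stolen cars, Amber Alert vehicles

def check_read(plate_text: str, location: str) -> None:
    """Compare one ALPR read against the hot list and alert on a match."""
    if plate_text in hot_list:
        # Depending on configuration, a real system might route this
        # alert to one officer or to an entire department.
        print(f"HIT: {plate_text} seen at {location}")

check_read("4ABC123", "Main St & 3rd Ave")  # flagged plate -> prints a hit
check_read("1QRS456", "Main St & 3rd Ave")  # unflagged plate -> no alert
```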

Automatic license plate readers are also used to track the past locations of vehicles, enabling dragnet surveillance of millions of people. Police departments and intelligence agencies can access stored ALPR data dating back years in government and private company databases. One private company, Vigilant Solutions, boasts that its license plate reader database has billions of records of motorists’ movements, collected from states across the country. Police can pay to access these records on a subscription basis. Police can also collect their own ALPR data, which they can share with other government agencies and even private companies.  
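
The retrospective use is just as easy to express. Once reads accumulate in a database, reconstructing one car’s movements is a single query, as in this hypothetical sketch using SQLite with invented rows (real systems hold billions of such records):

```python
# A hypothetical sketch of retrospective lookup: once reads accumulate in a
# database, reconstructing one car's movements is a single query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE plate_reads (
        plate_text TEXT, captured_at TEXT,
        latitude REAL, longitude REAL, camera_id TEXT
    )
""")
conn.executemany(
    "INSERT INTO plate_reads VALUES (?, ?, ?, ?, ?)",
    [
        ("4ABC123", "2021-03-09T08:15:00Z", 42.3601, -71.0589, "bridge-02"),
        ("4ABC123", "2022-06-01T17:40:00Z", 42.2626, -71.8023, "highway-11"),
    ],
)

# Every row returned is one sighting of the same car: a location
# history reaching back as far as the retention policy allows.
for row in conn.execute(
    "SELECT captured_at, latitude, longitude, camera_id FROM plate_reads "
    "WHERE plate_text = ? ORDER BY captured_at",
    ("4ABC123",),
):
    print(row)
```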

Some states have passed laws limiting how long police can retain license plate reader data and how they can share it and use it. In Massachusetts, lawmakers have filed ALPR legislation for many years, but to date, the technology remains entirely unregulated at the state level.  

Why should I be concerned about police use of license plate readers? 

This technology implicates several civil liberties, from freedom of expression and association to freedom from unfettered surveillance, and from protecting the privacy of people seeking abortion care in Massachusetts to defending the rights of immigrants across the country. In recent years, for example:

  • The Virginia state police used license plate readers to track people’s attendance at political events;  
  • The New York Police Department used license plate readers to keep track of who visited certain places of worship, and how often; 
  • Federal immigration authorities bought license plate reader data and used it to track immigrants across the country. 

The need for the public to have detailed information about this technology has become even more pressing after the U.S. Supreme Court’s decision overturning Roe v. Wade. With several states banning or severely limiting abortion, ALPR technology now poses a greater privacy threat. Police and private individuals in some states may seek location data showing people traveling to Massachusetts for reproductive care, in order to use that data to pursue civil and criminal penalties.

For all this surveillance power, ALPR technology has repeatedly been shown to be unreliable; like other police technologies, ALPRs can and do make mistakes. Use of ALPRs by law enforcement across the country has been marred by high error rates and improper identification of purportedly stolen vehicles. On multiple occasions, people have been traumatized after police, relying on faulty data from license plate reader systems, pulled guns on them at traffic stops. Here in Massachusetts, use of the technology has been plagued by years of inaccurate timestamp data.

Is this technology regulated? 

There is currently no statute in Massachusetts regulating police use of ALPRs.  

In 2020, the state Supreme Judicial Court recognized that Massachusetts’ use of this technology could implicate constitutional protections against unreasonable searches and seizures. Specifically, the court noted that “[w]ith enough cameras in enough locations, the historic location data from an ALPR system in Massachusetts would invade a reasonable expectation of privacy and would constitute a search for constitutional purposes.” 

In December 2022, the ACLU filed a public records lawsuit against the Massachusetts Executive Office of Public Safety and Security and the Department of Criminal Justice Information Services to obtain information about the state’s ALPR programs.  

Date

Monday, December 12, 2022 - 6:15pm

Related issues

Privacy and Surveillance, Police Accountability
