By Carol Rose | March 28, 2023
 
These days, it’s hard to distinguish hype from reality when it comes to GPT-4 and the other large language model (LLM) breakthroughs that tech companies are releasing into the world.
 
What, if anything, makes these systems different from the algorithms that already shape our daily lives? 
 
Think of GPT-4 and other LLMs as language predictors—algorithms programmed to compute the most likely next word in a sentence. GPT-4 “learned” how to make these predictions by ingesting huge quantities of text scraped off the internet. Because the data sets used to train these algorithms are unfathomably large, GPT-4 can converse in ways that seem eerily human. That’s not surprising given that everything it has learned about which words are likely to follow other words comes from us.  
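
To make the “language predictor” idea concrete, here is a minimal sketch in Python of the same objective at toy scale: count which words follow which in a sample of text, then predict the most likely next word. GPT-4 itself uses a vast neural network rather than simple counts, and the sample corpus below is invented for illustration, but the underlying task, predicting the next word from patterns in human-written text, is the same.

```python
from collections import Counter, defaultdict

# A toy "training corpus". GPT-4 was trained on vastly more text,
# scraped from across the internet.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word seen after `word` in the corpus."""
    candidates = following.get(word)
    if not candidates:
        # The model has never seen this word; it can only echo its training data.
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # "cat": the most frequent follower of "the"
```

Notice that the sketch involves no understanding of what a cat or a mat is; it only tracks which words tend to follow which. Scale that statistical pattern-matching up enormously and you get something that sounds human without being human.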
 
GPT-4’s ability to mimic human language patterns makes it useful for many things, including potential breakthroughs across our society, notably in medicine and science. It’s also good for party tricks: for example, it can write an omelet recipe in the form of a Shakespearean sonnet in mere seconds. This fluency across a wide range of English-language expression makes GPT-4 almost creepily good at predicting and parroting human language. But unlike human beings, GPT-4 does not understand words or even concepts. It has no intent. GPT-4 may seem human, but it is not. And therein lies the danger.

Mindless machines that can believably mimic humans create the risk that people will believe whatever a machine says is true. We already know that algorithms trained on big data tend to amplify systemic unfairness and bias, recycling and even magnifying past injustice with new technology. But GPT-4 doesn’t just reflect human bias. It also makes things up, hallucinating falsehoods seemingly at random. Indeed, one AI expert calls it a “bullshit machine.”
 
But so what? We already know there’s a lot of misinformation online. What’s the worst that could happen? 
 
As far as our democracy is concerned, the primary dangers arise in three areas: Truth, Extraction, and Control.  
 
First, truth. If people come to trust and rely on the machine, and the machine cannot tell truth from fiction, then we are moving into a post-epistemic or post-truth world.
 
According to OpenAI, the company that released it, GPT-4 will give existing AI systems even greater power to “reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement." Again, this is according to the people responsible for developing and unleashing this technology onto the world.
 
It also opens the way for scammers to deceive and manipulate others for power and profit. According to OpenAI, “absent safety mitigations, GPT-4 is able to give detailed guidance on how to conduct harmful or illegal activities.”
 
Bad actors may also attempt to weaponize these powerful algorithms against our democracy: through mass manipulation, by overwhelming policymakers with human-seeming calls to action, or by misleading voters.
 
The ability to program chatbots to fool massive numbers of people into thinking and acting on “alternative facts”—or to perpetuate “big lies”—has grave implications for democracy. Attempts to manipulate and even overturn elections are not new, but GPT-4 makes it dramatically easier to mislead far more people far more often—a scary prospect with more than 50 elections worldwide in 2024, including the U.S. presidential election.
 
According to New York State Attorney General Letitia James, the broadband industry funded six companies to submit fake comments and letters opposing net neutrality rules. Her investigation found that an astonishing 18 million fake comments were filed with the FCC, and half a million fake letters were sent to members of Congress. The fraud was discovered in part because the scheme was executed poorly. As internet security researcher Bruce Schneier observes, tools like GPT-4 will make it trivial for bad actors to pollute our democratic process with far more convincing fake comments and letters, which will in turn be much harder to detect as fraudulent.
 
Another area of concern lies in the fundamentally extractive nature of LLMs. These algorithms owe everything they can do to the hard work of human beings all over the world. After all, they are trained on data extracted from us. But only the super-rich profit from it. The artists, musicians, writers, poets, coders, doctors, and lawyers—to name a few—will find it increasingly hard to make a living when their work is “transformed” by AI without remuneration.
 
GPT-4 also runs the risk of exacerbating disparities in race and socio-economic status, while deepening the digital divide between those who can access the benefits of the technology and those who cannot. Given the extraordinary positive potential of GPT-4, notably in areas such as medicine, what steps do we need to take to ensure that everyone benefits equally from this new technology?
 
Finally, we need to consider ways in which GPT-4 and other LLMs can be deployed as tools of control. Black and brown people, immigrants, LGBTQ people, women, and people with low incomes already disproportionately feel the negative impact of bias in algorithms used by law enforcement and government agencies in deciding where to deploy police, whose liberty to take away, whom to hire, and how to provide or deny social services. Ensuring that powerful actors don’t rely on biased, false, but convincing algorithms to make decisions that affect people and families will be key if we are to ensure due process and equality under the law.
 
Now and for the foreseeable future, it will be up to us—including organizations like the ACLU of Massachusetts—to ensure that law and policy keep pace with developments in machine learning technologies. Technology is powerful, but so is the law. And while only a handful of technology companies control the algorithms, we the people control the government. It’s up to us, and to forward-thinking lawmakers who serve the public interest, to craft and implement regulations ensuring that the application of these systems does not undermine due process or equal rights. Our laws must prioritize the welfare of humans, animals, and our planet over the private profits of a few large companies. Ultimately, we must ensure that everyone shares in the upside of these technologies while minimizing the potential harms.
 
Now is the time to understand, demystify, and regulate these systems, or run the risk that they will be used to undermine our democracy, or worse. Doing so won’t be easy, but it’s essential work that will shape our country and the world for generations to come.
