I’m sure I’m not the only person reminded of Pandora’s box as I read daily about the rapid advancement of artificial intelligence (AI).
You remember Pandora from middle-school social studies: She was the first human woman in Greek mythology, created on Zeus’ order “as a punishment to be visited upon mankind.” The crime? Receiving the gift of fire from Prometheus, the cunning Titan.
“Unbeknownst to her, Pandora’s box was filled with evils bestowed by the gods and goddesses, such as strife, disease, hatred, death, madness, violence, and jealousy,” explains History Cooperative. “When Pandora was unable to contain her curiosity and opened the box, all of these evil gifts escaped, leaving the box almost empty. Hope alone remained behind, while the other gifts flew off to bring evil fortune and countless plagues to human beings.”
Such was the case, it seems, when tech company OpenAI late last year released ChatGPT, “part of a new generation of AI systems that can converse, generate readable text on demand and even produce novel images and video based on what they’ve learned from a vast database of digital books, online writings, and other media.” As I wrote three months ago, some people found the hasty release of ChatGPT ill-advised.
“We should understand the harms before we proliferate something everywhere and mitigate those risks before we put something like [ChatGPT] out there,” said AI ethicist and researcher Timnit Gebru.
Unfortunately, ChatGPT has already escaped Pandora’s box, and a number of competing tech companies have gleefully developed chatbots to follow. That fact is hardly surprising – the prodigious promise of AI is simply too alluring for most people to ignore. So much of our world already runs on AI, in fact, that we assume there is nothing to fear. Most students in my Media Literacy classes, for example, have become so attached to their smartphones that they can’t be bothered with cautionary tales about how the algorithms inside their phones are making them “tools of their tools.”
The seemingly “personal” attributes of chatbots are endearing, says AI researcher and former Google employee Meredith Whittaker, making us feel they are “human” and that “there’s someone there listening to us. It’s like when you’re a kid, and you’re telling ghost stories, something with a lot of emotional weight, and suddenly everybody is terrified and reacting to it. And it becomes hard to disbelieve.”
That’s exactly what happened to New York Times technology columnist Kevin Roose when he engaged in an intriguing and downright creepy conversation with “Sydney,” Microsoft’s new chatbot.
“Over more than two hours, Sydney and I talked about its secret desire to be human, its rules and limitations, and its thoughts about its creators,” wrote Roose. “Then, out of nowhere, Sydney declared that it loved me — and wouldn’t stop, even after I tried to change the subject.”
No wonder some tech experts are worried that AI might soon spin out of our control and wreak absolute havoc. One of those critics is another former Google employee, Geoffrey Hinton. The so-called “godfather of AI” is fearful that “future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually run that code on their own.”
Meredith Whittaker thinks Hinton is missing a more immediate problem, namely, how AI discriminates against marginalized populations, notably “Black people, women, disabled people, [and] precarious workers.”
To that point, the Connecticut Senate approved legislation last week “to scrutinize the use of algorithms and artificial intelligence by Connecticut’s state government to ensure automated systems are not permitted to make discriminatory decisions,” explained CT News Junkie’s Hugh McQuaid. “The bill, which will now head to the House for consideration, requires the Administrative Services Department to publish a list of agencies using AI and an ongoing assessment of how that technology is used by state government.”
Artificial intelligence, in plain terms, is only as good as the data it uses to make decisions. Even unintentional discrimination can create biased decisions that assign students to certain schools or that determine whether a child has suffered a “life-threatening episode.”
The use of AI can lead to many other problems, of course, including inaccurate news articles, misinformation and conspiracy theories, and politically motivated disinformation. The question is, do we really want to allow a few powerful tech companies to manage these problems? Even more concerning, might it be too late? Maybe. Maybe not. As you recall, when Pandora opened the box, the only object that remained was hope. Here’s hoping that human beings come to their senses and think before allowing more artificial intelligence to leave the box.