Artificial Thinker
Credit: Pat Bagley, The Salt Lake Tribune, UT / CTNewsJunkie via Cagle Cartoons / ALL RIGHTS RESERVED
Barth Keck

The tech-news website CNET announced last week it would suspend its use of artificial intelligence (AI) to write news stories due to “substantial” errors found in many of the 77 AI-produced articles it had published since November.

“The disclosure comes after Futurism reported earlier this month that CNET was quietly using AI to write articles and later found errors in one of those posts,” according to CNN.

The timing of this story is intriguing, coming less than two months after the launch of ChatGPT from tech startup OpenAI, “part of a new generation of AI systems that can converse, generate readable text on demand and even produce novel images and video based on what they’ve learned from a vast database of digital books, online writings and other media.”

Although CNET used another AI tool – not ChatGPT – for its now-suspended articles, so-called chatbots like ChatGPT are, in a word, hot right now.

“The technology has venture capitalists excited,” reports CNBC. “Funding for generative AI companies reached $1.37 billion in 2022 alone, according to Pitchbook. While ChatGPT is free to use, OpenAI recently announced a new $20/month subscription plan that gives members additional benefits such as access to ChatGPT even during peak times.”

Not to be outdone, Google plans to unveil its own ChatGPT-style tool later this week, while other tech companies, big and small, have already developed their own large language models.

ChatGPT has taken the world by storm so suddenly that many school systems – including New York City’s – have reflexively banned it out of fear that students will use it to plagiarize assignments.

“The decision by the largest U.S. school district to restrict the ChatGPT website on school devices and networks could have ripple effects on other schools, and teachers scrambling to figure out how to prevent cheating,” reports the Associated Press. “The creators of ChatGPT say they’re also looking for ways to detect misuse.”

“Men have become the tools of their tools.”

Henry David Thoreau, “Walden” (1854)

Some colleges, conversely, are investigating ways to incorporate ChatGPT into the curriculum rather than prohibit it. Sheetal Sood, associate dean of the College of Education at the University of Hartford, said, “I started thinking about students and how this can actually be used to support students as opposed to being thought of as something that we just cannot include.”

Clearly, the jury is still out on ChatGPT in schools. And that’s as it should be. How often does our society rush to embrace nascent technologies only to later discover problems created by those very innovations? It’s a scenario previously studied by media gurus like Marshall McLuhan and Neil Postman.

“We are currently surrounded by throngs of zealous Theuths, one-eyed prophets who see only what new technologies can do and are incapable of imagining what they will undo,” wrote Postman in his book “Technopoly” 30 years ago. “We might call such people Technophiles. They gaze on technology as a lover does on his beloved, seeing it as without blemish and entertaining no apprehension for the future.”

Put another way, “We should understand the harms before we proliferate something everywhere and mitigate those risks before we put something like [ChatGPT] out there,” said Timnit Gebru, former Google research scientist and current ethical AI researcher.

Thus, while ChatGPT could possibly serve as a constructive educational tool – a point made persuasively by my CT News Junkie colleague Jamil Ragland – it does not come without potential drawbacks.

Factual mistakes are an obvious problem, of course, as CNET’s faux pas demonstrates, and they point to a greater concern: Artificial intelligence is only as good as the data upon which it’s built, and that data originates from fallible human beings.

“It’s drawing from an Internet that reflects humanity, and humanity has bias and stereotypes,” said Associate Provost Jennifer Frederick of Yale University. And while ChatGPT is trained to give you language that isn’t racist or biased, “you can get a little more sophisticated in your prompt and you can actually get it to do that.”

Indeed, some critics have already complained that ChatGPT demonstrates a liberal bias. Then again, such accusations are “anecdotal” at this point, reports tech writer Matthew Gault, something to be expected with a new technology: “This is not the first moral panic around ChatGPT, and it won’t be the last.”

The best course of action at this point, therefore, is caution. Yes, experiment with ChatGPT on your own, but use it in the classroom? I’ll take a hard pass. First, let the engineers work out the kinks and the ethicists debate the circumstances.

As Neil Postman wrote, “A new technology does not add or subtract something. It changes everything.” Best that we identify the specific changes ChatGPT might bring before we surrender to its allure.

Barth Keck is in his 32nd year as an English teacher and 18th year as an assistant football coach at Haddam-Killingworth High School where he teaches courses in journalism, media literacy, and AP English Language & Composition. Follow Barth on Twitter @keckb33 or email him here.

The views, opinions, positions, or strategies expressed by the author are theirs alone, and do not necessarily reflect the views, opinions, or positions of CTNewsJunkie.com or any of the author's other employers.