Disinformation Booth
Credit: Dave Whamond, Canada, PoliticalCartoons.com / CTNewsJunkie via Cagle Cartoons / ALL RIGHTS RESERVED
Barth Keck

Connecticut Public Television’s documentary “Fake: Searching for Truth in the Age of Misinformation” aired in early 2020. We viewed that film in my Media Literacy class two weeks ago as an introduction to our unit on discerning facts from falsehoods.

The four-year-old documentary is simultaneously timely and outdated. It’s timely because of points like this one by David Rand, Associate Professor of Management Science and Brain and Cognitive Sciences at MIT: “A classic effect from cognitive psychology called the ‘Illusory Truth Effect’ is the finding that just hearing a statement repeated makes it seem more plausible.”

Aha! The perfect explanation for how Donald Trump’s “Big Lie” about the 2020 presidential election has as many (if not more) believers now than when he first proposed it. Despite overwhelming evidence to the contrary, Trump has continually blared his election conspiracy theory for more than three years.

“Just months after the January 6th insurrection, in March 2021, nearly three in ten Americans (29%) said the 2020 election was stolen from Donald Trump,” according to research from the Public Religion Research Institute. “PRRI’s recent American Values Survey showed a similar percentage of Americans believe the election was stolen (32%) as of September 2023. Partisan beliefs in the ‘Big Lie’ have changed little from 2021 (when 66% of Republicans, 27% of independents, and 4% of Democrats believed the 2020 election was stolen) to 2023 (63% of Republicans, 31% of independents, and 6% of Democrats).”

This, despite the 61 court cases lost by the Trump team and despite countless inquiries debunking the “Big Lie” like the one conducted by the Associated Press. Nonetheless, millions of Americans continue to believe Trump’s incessant claims. The Illusory Truth Effect at work.

Thus, “Fake: Searching for Truth in the Age of Misinformation” is still timely. And yet, certain aspects are outdated, thanks to the rapid evolution of digital information – a reality that constantly challenges responsible citizens seeking truthful information. It’s a colossal game of “Whac-a-Mole.”

Deepfake videos are just one example of how quickly technology is advancing. In the documentary, Siwei Lyu, Director of the Computer Vision and Machine Learning Lab at the University at Albany, explains how deepfakes can be detected by the way people in them do not blink regularly. Not long thereafter, producers of these faux videos learned how to “fix” this discrepancy using artificial intelligence. Indeed, content creators have barely scratched the surface of AI’s “creative” capabilities.

What’s even more troubling is the challenge of navigating the vast sea of information on social media, the primary source of news for Generation Z and younger millennials. Sadly, these platforms share not only news; they also share “fake news,” or what’s more accurately called “misinformation.” The Hamas attacks on Israel in October, for example, spawned a number of misleading and blatantly fallacious posts: “One recent TikTok video, seen by more than 300,000 users and reviewed by CNN, promoted conspiracy theories about the origins of the Hamas attack, including false claims that it was orchestrated by the media.”

The question is, how do we thwart social media’s hold over people – particularly teenagers, who have a veritable world unto themselves on platforms like TikTok and Instagram? We certainly can’t expect the platforms to do it, even though several of them have taken baby steps to address the problem. Instagram’s parent company Meta, for example, plans to send messages called “nighttime nudges” to teens who’ve been on the app for more than 10 minutes at night. In addition, teen accounts are automatically placed on the “most restrictive content control settings.”

Meta also suggested recently that a federal mandate could “centralize parental consent, controls, and user age verification for apps within a mobile device’s app store or operating system.”

U.S. Sens. Richard Blumenthal and Marsha Blackburn, sponsors of the bipartisan “Kids Online Safety Act,” aren’t buying it: “Meta’s adamant attempts to deflect responsibility for its own products are beyond the pale. The company’s proposals push the responsibility of safety onto parents without making the necessary changes to toxic black box algorithms or Big Tech’s harmful business model.”

Translation: Don’t expect any substantive solutions from the companies that run social-media platforms, especially when it comes to political information. As the election season gears up, emotionally charged, fact-challenged posts – the very content that attracts the most clicks – will only increase. The Facebooks, Instagrams, and TikToks of the digital world aren’t likely to put a stop to those cash cows.

“Information campaigns right before elections are problematic, and have been problematic for a long time,” concludes professor David Rand of MIT. “Social media didn’t invent that, but it’s certainly possible that social media exacerbates it by making it easier for things to really spread widely.”

No doubt. The forecast calls for a social-media tsunami of misinformation, disinformation, and unadulterated deceitfulness as Election Day 2024 approaches. We’re going to need a whole lot more than raincoats and galoshes to weather this storm.


Barth Keck is in his 32nd year as an English teacher and 18th year as an assistant football coach at Haddam-Killingworth High School where he teaches courses in journalism, media literacy, and AP English Language & Composition. Follow Barth on Twitter @keckb33 or email him here.

The views, opinions, positions, or strategies expressed by the author are theirs alone, and do not necessarily reflect the views, opinions, or positions of CTNewsJunkie.com or any of the author's other employers.