TAC’s David Dinielli and law student Eleanor Runde — who worked on an amicus brief in an upcoming U.S. Supreme Court case about tech platforms’ legal liabilities — at WNHH FM. (Paul Bass / NHI photo)

Consumer warning: If you want to publish a comment at the end of this story calling people names or lying about them committing horrible acts, tough luck. Your contributions don’t immediately get posted. They get reviewed and vetted according to rules of civility (not to mention libel law).

If, however, you have a terrorist video seeking to recruit people to blow up enemies whose religion or nationality you despise, or a lie-filled screed about someone you read about in the news, you can instantly publish it on YouTube. YouTube’s recommendation algorithm might even help your message reach hateful loners all over the globe and spur them to take action of their own. And if some … unfortunate events follow, oh well. YouTube can continue doing that with more videos — as long as its parent company convinces U.S. Supreme Court justices to maintain its protection under a law passed nine years before the social-media video powerhouse was created.

That is among the stakes in Gonzalez v. Google, one of two cases the court is hearing next month concerning how legally responsible global social-media platforms should be for the dangerous or harmful content they publish and profit from.

The parents of a young woman killed by an ISIS bomber in a Paris cafe filed the Gonzalez lawsuit. They argue that Google-owned YouTube published and then amplified the distribution of an ISIS recruitment video that lured people to join the organization and take part in the deadly attack, in violation of U.S. anti-terrorism law.

Google counters that it has immunity in this case under Section 230 of the bipartisan 1996 Communications Decency Act, which protects online platforms from legal liability for postings generated by the public.

Enter a group of law students who have been devoting long hours to exploring how to ​“reduce harm by tech companies and digital platforms while also respecting everyone’s rights,” including users and producers. 

The students belong to Yale Law School’s Tech Accountability & Competition Project (TAC). Under the supervision of an experienced attorney in this field named David Dinielli (read about his background here), the students in the clinic wrote an amicus brief submitted to the Court this past Thursday as part of the case. The justices are scheduled to hear the case on Feb. 21; the TAC crew hopes to travel to D.C. to observe the arguments in person.

The students wrote the amicus brief on behalf of Section 230’s authors, Republican former U.S. Rep. Chris Cox of California and Democratic U.S. Sen. Ron Wyden of Oregon.

Gonzalez is part of a broader societal reckoning over Section 230 — and the special protections Google, YouTube, Facebook, Twitter et al. have under the law to publish and profit from the promotion of hate and violence and libelous personal attacks — contained in several federal cases and pending Congressional bills. 

“What should be the responsibility of internet platforms? Should it be different than those of, for example, the New Haven Independent? And why? What are the limits of that?” That is how Dinielli characterized the broader question during an interview on WNHH FM’s ​“Dateline New Haven” program.

For the purposes of this amicus brief, Dinielli and his students did not focus on that broader issue. That wasn’t the assignment. (Read the brief here.)

The students focused on a narrow issue: whether Google/YouTube’s actions in this case are covered under the language of Section 230.

They concluded that Google is indeed covered under the section’s two-prong test, according to third-year law student Eleanor Runde, one of the TAC members who collaborated on the amicus brief.

Prong one: Was the video in question user-generated, rather than created by YouTube? ​“They didn’t alter the content. They didn’t go in and change the video or edit the text and make it say something that it didn’t before in a way that made it more illegal or made it illegal whereas before it was legal,” Runde observed. Section 230 relies on that criterion.

Prong two: Section 230 protects ​“publishers.” ​“In this case it was pretty clear that they were publishing videos,” Runde noted.

A counterargument in this case is that by recommending the ISIS video, YouTube in effect created new information or content; and that the amplification of the video and targeting of people who might be vulnerable to the content goes beyond merely ​“publishing” the video.

What about the bigger question?

The internet has changed dramatically since 1996. Congress approved Section 230 eight years before the creation of Facebook, nine years before YouTube, 10 years before Twitter. Do Facebook and Google and Twitter deserve special legal protection — protection not afforded (thankfully) to much smaller publishers of news and information and opinions — to generate millions of dollars of revenue by enabling millions of people to instantly weigh in and publish their videos and opinions and ​“facts”?

Is society gaining anything from Section 230?

Or has its time passed, and is it now a menace to society with no redeeming value, a shield for unscrupulous billionaire media titans?

“There are trade-offs. There are historical situations where people’s ability to communicate about real-time events — [like the] Arab spring … We are probably happy that people were able to communicate with each other without” moderators and gatekeepers deciding what could be published, Dinielli argued.

“I think the enrichment of our discourse by allowing user-generated content in real time is considerable,” argued Runde.

The writer of this article is an absolutist on this question, convinced Congress should repeal Section 230; hence the bias coursing through the lines of this story. He (aka ​“I”) does not see the non-stop flood of instantaneous unmoderated comments promoting civic discussion or protecting democracy. Rather, he sees it endangering democracy, while enabling the new corporate mass-media titans like Mark Zuckerberg and Elon Musk to generate billions of dollars in revenues by avoiding the essential libel limits placed on much smaller publishers (i.e. newspapers, websites, TV stations). 

They should have to follow the same laws, should be sued for publishing false or harmful content, even if that means hiring lots of people to review comments and user-generated posts before publishing them. The New York Times pays the money to do that.

If that means slowing the gusher of instant postings — if we need to wait five extra minutes or an hour to hear what hundreds, not thousands, of people think about Joe Biden’s or Donald Trump’s latest controversies or rumors of ethnic violence in war-torn lands or the Kardashians’ latest selfies … democracy won’t suffer. 

Civil discourse won’t suffer. It will benefit. 

And government won’t be ​“censoring” the platforms or imposing ideological dictates; simple libel law will bind the blowhards. 

The Edenic ​“commons” imagined back in 1996 has developed into a dystopia broken into brutal private profit-driven press fiefdoms in league with terrorists and Alex Jones-style trolls and doxxers and fabulists who drown out the powerless and power virality at the cost of truth, decency, public safety, coherent thought, and true civic discourse. 

The business model is based on a scale that makes effective moderation impossible. 

Section 230 is fentanyl enabling these new press barons to avoid accountability or checks on their power and profit. Blow it up.

YouTube video

Runde and Dinielli, who have studied the subject in far more depth, offered a more nuanced, level-headed take on that question in the ​“Dateline” discussion. Click on the above video to watch the full conversation.

Click here to subscribe to ​“Dateline New Haven” and here to subscribe to other WNHH FM podcasts.