Claims of misinformation, censorship place Section 230 in crosshairs

New York

Back in 1996, when the World Wide Web was just beginning to revolutionize the ways human beings could communicate, many of those helping to build the emerging online tech industry were filled with a boundless sense of optimism.

The core of this optimism was the confidence that the internet could be a truly open space for freedom of speech. It was an ethos embodied that year by a much-circulated and somewhat sly “A Declaration of the Independence of Cyberspace” by the cyberlibertarian essayist and Grateful Dead lyricist John Perry Barlow. He declared that the legal concepts of the world of matter, “concepts of property, expression, identity,” simply did not apply to the internet, a virtually pure digital space for freedom of speech beyond the “governments of the industrial world, you weary giants of flesh and steel.”

“We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity,” Mr. Barlow wrote.

Aram Sinnreich, among the first internet industry analysts, remembers those heady days well. “Nineteen ninety-six was this moment in which the idea of the internet as a return to a kind of Eden was born in the popular consciousness,” says Dr. Sinnreich, now the chair of the communication studies division at American University’s School of Communication in Washington.

“It was posed like this stateless, identity-less, free flow of consciousness that would liberate us from – from sin, from ‘original sin,’ really – and from nationalism and from violence and from racism and sexism and from all the other isms.”

That same year, too, Congress passed the so-called "26 words that created the internet," the once-obscure Section 230 of the Communications Decency Act, which created a new legal landscape for the internet and its portals of speech and information.

Reflecting the ethos of the time, Section 230 granted emerging interactive services on the internet a general immunity from most speech-restricting civil claims in the weary world of flesh and steel. Given the virtually borderless scope of the cyberworld, policymakers believed the industry would never flourish if it were liable for every iota of libel or reckless disregard for truth that its millions of potential users might post to its sites.

Nearly 25 years later, optimism has given way to a general unease about the state of free speech online. And this year critics from both the left and right have been calling for those 26 words to be repealed, or at least significantly changed, with each citing the growing power of social media giants like Facebook, Google, and Twitter, whose algorithmic architectures have in many ways come to control the information people see – or don’t see.

On Tuesday, President Donald Trump threatened to veto a defense policy bill if Congress didn’t include a provision to have Section 230 “terminated,” as he and other Republicans believe social media companies, and Twitter especially, have abused their far-reaching control of information to censor conservative views. Both the Democrat-led House and the Republican-led Senate are bringing the bill up for a vote anyway, with Republican lawmakers saying the defense bill is unrelated to Section 230 and shouldn’t be held up over a separate issue.

But President-elect Joe Biden also suggested earlier this year that Section 230 should be “revoked” since social media sites, he said, had become virtual cauldrons of misinformation “propagating falsehoods they know to be false.”

At the same time, many liberal critics say the founding laissez-faire principles that shaped ideas of free speech online, combined with the immunities granted to internet companies, helped foster “an information environment that is incredibly polluted, that’s making everyone sick, and where only the powerful really feel at liberty to speak freely,” says Mary Anne Franks, professor of law at the University of Miami School of Law.

“So in the end, not a big win for free speech,” she says. “Everyone else is kind of where they were before, which is, yes, you can speak, but be prepared to be harassed, be prepared to be defamed, be prepared to be abused and possibly threatened. And the most powerful members of society will continue to be able to shout over you and have bigger platforms than you ever could possibly get.”

Tara Todras-Whitehill/AP/File

“We are the men of Facebook” is written on the ground as anti-government protesters gather in Tahrir Square in Cairo on Feb. 6, 2011. The staunchly pro-government Egyptian Parliament passed a bill July 16, 2018, targeting popular social media accounts that authorities accuse of publishing “fake news,” the latest move to suppress dissent and silence independent sources of news.

From the Arab Spring to troll culture

A decade after the passage of Section 230, however, the emergence of social media platforms like Facebook and Twitter in the mid-2000s fed another wave of optimism about the possibilities of cyberspace and the cause of human freedom, says Dr. Sinnreich.

“So there’s this second moment in time when basically people are being told, ‘OK, so the internet’s not going to erase sin and reset the human condition,’” he says. “But what it is going to do is provide these tools that are going to democratize cultural power.”

An expert in the history of government regulation of media, he shared some of that optimism, seeing the possibilities of new forms of human communities that could potentially flourish outside government and corporate powers. Social networks online could enable “horizontal” cultural power, leading to political and social changes that diminish the power of elites.

At the time, the Obama administration was championing similar notions of internet freedom around the world, making it one of the paramount values of American foreign policy.

“The idea was, if we can build platforms that allow everybody to participate in the cultural process, that will lead more people to participate in the civic process,” Dr. Sinnreich says. “Which will then force autocratic and hierarchical governments to become more democratic, which will then open the doors to new markets, which will allow capitalism to flourish even more in the global arena.”

Events such as the Arab Spring in 2011 and other “revolutions” enabled by online social networks like Twitter only seemed to confirm this optimism that democracy and its bedrock principles of free speech could spread around the world.

But then it all came crashing down. The promises of the Arab Spring never materialized. Edward Snowden’s revelations uncovered massive online surveillance by the U.S. government. And then a new menace emerged: Troll culture, often clothed in anonymity, used the internet’s social networks to build online communities committed to misogyny, racism, and white supremacy. 

At the same time, the structural architecture of social media platforms enabled a massively lucrative business model rooted in the relentless and meticulous surveillance of billions of users’ online behavior, experts say. Social media companies then mine this data with algorithms that determine the limited, attention-grabbing information that flows to users’ news feeds. 

Scholars often call this business model “affective engagement,” or the monetization of human psychology – the strong emotions that tend to keep users glued to their feeds. 

“I’m kind of worried about how this has caused people to silo into their own kind of media ecosystems and echo chambers,” says Tim Weninger, professor of computer science and engineering at the University of Notre Dame in Indiana, who has studied the structural impact of social media algorithms and the corresponding proliferation of misinformation and “fake news.” 

“The challenges right now are to make people aware that their clicks and likes and uploads and retweets – all those go into Twitter’s and Facebook’s and Instagram’s and Reddit’s algorithms in order to feed back more information to keep you on the site,” he says. “The primary goal is to keep you on the site so that you will click an ad and buy something, or do those things to help them generate revenue.” 

How rage and fear power fake news

In one of his studies, Dr. Weninger and his colleagues found that 75% of users who share news stories online only read the headlines of those stories – which are often sensationalistic and evoke strong emotions. His research also found that even a single share or “like” of certain posts has an enormous impact on how often algorithms will then push those posts to others. 

“What happens is that vote, that retweet, or that ‘like’ goes into the system as a signal to, hey, someone likes this, so show this to more people,” Dr. Weninger says. “So as the algorithms take into account our votes and likes, coupled with the fact that we don’t really read the thing before we post them – those are basically the antecedents to fake news.”
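The feedback loop Dr. Weninger describes – engagement signals feeding back into what the feed shows next – can be captured in a minimal, purely illustrative Python sketch. This is not any platform’s actual code; the scoring weights and function names are invented for the example.

```python
# Hypothetical sketch of an engagement-driven feed ranker: each like or
# share raises a post's score, and higher-scoring posts are shown to
# more people, inviting still more engagement. All names are illustrative.

def rank_feed(posts):
    """Order posts by a simple engagement score, most-engaged first."""
    return sorted(posts, key=lambda p: p["likes"] + 2 * p["shares"], reverse=True)

def register_engagement(post, likes=0, shares=0):
    """Feed a user's clicks back into the signal the ranker reads."""
    post["likes"] += likes
    post["shares"] += shares

posts = [
    {"id": "calm-report", "likes": 5, "shares": 1},
    {"id": "outrage-headline", "likes": 4, "shares": 0},
]

# A single burst of engagement on the sensational post flips the ordering.
register_engagement(posts[1], likes=1, shares=2)
print(rank_feed(posts)[0]["id"])  # -> outrage-headline
```

The point of the toy model is the loop itself: because `register_engagement` writes to the very fields `rank_feed` reads, a few early likes on an emotionally charged post are enough to push it in front of more users – the “antecedent to fake news” Dr. Weninger describes.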

To make matters worse, this kind of structural architecture lends itself to what he and other experts call “coordinated inauthentic behavior,” in which nefarious actors in places like China and Russia can game the system with posts and likes and shares of false information that piques users’ emotions and helps fake news go viral.

And the primary emotions driving affective engagement are often fear and rage – emotions that a recent body of research suggests are a significant factor in making misinformation go viral, says Linda Peek Schacht, professor of leadership and public service at Lipscomb University in Tennessee.

“We have had several generations now where that critical thinking, that media literacy, so necessary for the democratic process has not either been taught or nurtured,” says Ms. Schacht, also a longtime board member of the International Women’s Media Foundation. “If you attack science enough, if you attack the press enough, if you attack the government itself enough, you are in fact creating such distrust and rage that I would argue you’re burning down the democracy house.”

This year especially, however, as misinformation and conspiracy theories surrounding COVID-19 and voter fraud began to proliferate on their platforms, companies like Twitter and Facebook have been forced to reconsider their outsize roles in the nation’s toxic political discourse and ever-widening divides.

Civic responsibility vs. unfettered free speech

Social media giants have also had to come to grips with what might be their unavoidable civic responsibilities, as their digital public spheres come to dominate public discourse and they conduct the flow of information essential to any functioning democracy.

In fact, earlier this year, after President Trump tweeted that mail-in voting would be “substantially fraudulent” and contribute to a “Rigged Election” in November, Twitter executives made a momentous decision: The company would, for the first time, post a warning label on the words of the president of the United States, calling his claims “unsubstantiated.”

Facebook soon followed suit, though in a different way, posting a label to Mr. Trump’s similar post about mail-in voting leading “to the most CORRUPT ELECTION in our Nation’s History!” with a link to official information about voting procedures.

When Twitter was founded 14 years ago, company executives liked to joke that it was “the free speech wing of the free speech party,” with few restrictions on the information its users posted. But the company has been forced to adjust, given the issues that have made Republicans and Democrats question the nation’s social media giants and Section 230.

“What we saw and what the market told us was that people would not put up with abuse, harassment, and misleading information that would cause offline harm, and they would leave our service because of it,” Twitter’s CEO Jack Dorsey told a Senate panel in mid-November. “So our intention is to create clear policy, clear enforcement that enables people to feel that they can express themselves on our service and ultimately trust it.” 

There are currently at least five new bills before Congress, and government regulators are continuing to seek ways to rein in the enormous power social media companies have come to wield. 

But scholars like Dr. Weninger prefer to see the “invisible hand” of the market continue to shape these kinds of valuable services, with a minimum of new regulations.  

“I’m actually kind of happy that we’re having this kind of societal discussion right now,” he says. “Most of my job is to point out things that are broken, the things that are wrong. But overall, I’m optimistic that we’re going to figure it out.”
