To date, social media companies and other online technology platforms have operated with little oversight of the content their users post. Many experts attribute this hands-off approach in part to Section 230 of the 1996 Communications Decency Act, which affirms that “online platforms are not legally responsible for what users post.”


With the rise of social media as a means for political dialogue and news dissemination, however, technology companies have been rethinking positions on regulating user content on their platforms. Public and private opinion on this matter is also shifting rapidly, with a handful of major corporate advertisers recently pulling ads from Facebook in protest. But the biggest driving force towards possible regulation of social media companies and online platforms is coming directly from the Trump administration, for reasons we’ll detail below. With Section 230 in the crosshairs, the future of a major part of the internet hangs in the balance.

Section 230 and the President

Debates have long swirled about how much responsibility social media platforms should take for their users’ content and speech. Politicians have grown increasingly concerned with the influence and scale of Silicon Valley, and scrutiny is higher than ever – paid political ads in an election year are just one of the latest areas to earn a place under the public’s microscope. Traditionally, however, each company has set its own standards for regulation.

The most recent flashpoint came when Twitter applied a fact-check warning to a pair of President Trump’s tweets “in which he claims without evidence that mail-in ballots are fraudulent.” Twitter spokesperson Trenton Kennedy cited “misleading information about the voting process, specifically mail-in ballots” in explaining the company’s decision to label the tweets – one of several areas “including civic integrity” where the company has “drawn lines.”

World leaders’ tweets were long granted immunity from standard content rules under a ‘newsworthiness’ exemption before misinformation about the coronavirus pandemic led Twitter to change its approach. President Trump, however, was clearly incensed by the decision to place fact-checking labels on his tweets, leading him to sign an executive order that “could limit legal protections afforded to social media platforms.” The order echoed a longstanding conservative complaint that social media companies hold an anti-conservative bias and “directs the initiation of an FCC rulemaking proceeding to review the ‘good faith’ content modifications and removal element of [Section 230 of the Communications Decency Act].”

Expert Response and Potential Consequences

Legal experts were unconvinced by the order. Kate Klonick, a law professor at St. John’s University in New York, told NPR that it “flies in the face of 25 years of judicial precedent… [that is] unlikely to have any kind of weight or authority.” FCC commissioners split along party lines: Democratic Commissioner Jessica Rosenworcel cited the “huge thicket of First Amendment issues that [the order] drags the agency into,” while Trump-appointed Commissioner Brendan Carr told Yahoo that the proposal “makes sense,” in that an FCC investigation could “be used to harass companies, demand documents, publish reports that would be potentially embarrassing.”

While the order itself is unlikely to carry true legal weight, concerns about Section 230 are shared across the aisle, including by Joe Biden and Nancy Pelosi, but for different reasons. Democrats argue that Section 230 “has created a fertile environment for the rampant spread of online misinformation, harassment and abuse,” and that removal of its protections would force major platforms to do more to stop it.

The result is “extremely precarious straits,” Eric Goldman, a Santa Clara University law professor and co-director of the High Tech Law Institute, told NPR. Goldman argues that although the two parties disagree on why Section 230 should be repealed, they may still agree on a flat repeal – which could lead to a shared, undesirable outcome: “more censorship by major tech companies and [a] potentially paralyzing [effect on] other websites.”

The downstream effect could be that potential lawsuits stymie services like Wikipedia, the Internet Archive, and “all these other public goods that exist and have a public-interest component that would not exist in a world without 230,” explains Aaron Mackey, an attorney for the digital civil liberties nonprofit the Electronic Frontier Foundation. The overarching fear is that removing Section 230 could create an environment of heavy-handed moderation and preemptive takedowns, stifling free speech in the process.

What Happens Next

In early June, the Center for Democracy and Technology filed suit against President Trump over the order. Since then, other social media platforms have taken fresh action – Reddit banned the subreddit “The_Donald” alongside “roughly 2,000 other communities from across the political spectrum for violations of its policies,” most of which were no longer active. Twitch suspended President Trump’s account “for violating its policies against hateful conduct”; YouTube “barr[ed] six channels for violating its policies… includ[ing] those of two prominent white supremacists”; and Facebook held firm on its stance that it will not judge the validity of content on its platform.

Short of concrete action to repeal Section 230 – or real, tangible legislation to reform it – sweeping changes appear out of the question. But the protections engendered by Section 230 have shaped the internet we know and use today, for better or worse. Any change would mean seismic shifts that reverberate far beyond political affiliation – and a very different version of the platforms used by billions around the world.