The tumult leading up to the 2020 U.S. presidential election may be in the rearview, but the election itself continues to occupy headlines even after the outcome was decided in favor of former Vice President Joe Biden. Disinformation campaigns were a prominent part of the lead-up to both the 2016 and 2020 elections, and broader scrutiny of the technology sector means companies like Twitter and Facebook are under more pressure than ever to combat the spread of disinformation on their platforms.
Both companies took active steps to limit the spread of election misinformation – fact-checking and even labeling misleading posts related to voting – and both announced changes to their respective political ad policies in January. But with looming runoffs in Georgia set to decide control of the Senate, neither company is quite able to exhale.
In this post, we explore Facebook and Twitter’s efforts to stem the tide of election misinformation in 2020, and look at possible ways the companies can improve.
2020 U.S. Presidential Election
Concerns began to mount before Election Day that the platforms’ variable approaches to preventing the spread of misinformation would lead to variable results. Critics of Twitter and Facebook maintained that posts were taken down or labeled as misleading only after their messages had already spread. As the 2020 presidential election played out – with results in question until days after Election Day because of increased mail-in voting, a byproduct of the coronavirus pandemic – Facebook, Twitter, YouTube, TikTok, and others soon found themselves “battling waves of misinformation.”
Each platform used a different approach. Facebook labeled posts by President Trump after he “alleged without evidence that he was ‘up BIG’ and his political opponents were ‘trying to steal the election.’” It eventually “identified Biden as the ‘projected winner’” as news outlets called the race, and appended reminders to “some Trump posts that… the vote would take longer than usual because of COVID-19 and that election officials follow strict rules.” It also contended with Stop the Steal, “one of the fastest-growing groups in Facebook history,” which garnered 320,000 members in less than 24 hours while falsely claiming that Biden was attempting to manipulate the election; Facebook removed the group “hours later for trying to incite violence.”
Twitter began labeling tweets containing misleading information in the run-up to the election, and warned that it might delete tweets or disable accounts that “provide misleading information about voting, attempt to suppress or intimidate voters, provide false or misleading information about results, [and] fail to fully or accurately disclose the identity of the tweeter.” The platform (a favorite of President Trump) “labeled and obscured” multiple tweets and retweets from the president that erroneously alleged voter fraud or election misconduct. It also “suspended a group of accounts that posed as legitimate news organizations” that had “spread false reports that Democrat Joe Biden had won the election before the vote tallies were in.”
YouTube had its hands full as well. It labeled a video of the sitting president “falsely claiming victory on election night,” though the label was one applied to “all election-related videos and search results” rather than one specific to misinformation. It was more lenient on two videos from One America News, “a far-right news organization, that falsely declare victory for Trump,” because the platform’s rules “focus narrowly on voter suppression”; instead of removing the videos, it prevented One America News from advertising on them while allowing them to stay online.
What Happens Next
While each platform took “unprecedented” action, it was not enough to stem the tide of criticism. Experts held up Stop the Steal as a particularly powerful example of the viral nature of misinformation: Stanford Internet Observatory disinformation researcher Renee DiResta characterized Facebook groups to the New York Times as “powerful infrastructure for organizing,” with Stop the Steal in particular becoming a hub for “posts, images and videos [that] have been proved false,” generating new posts faster than human moderators could keep up.
Before the election was decided, Facebook announced further plans “to add more ‘friction’ – such as an additional click or two – before people can share posts and other content,” as well as “demot[ing] content on the News Feed if it contains election-related misinformation” and “limit[ing] the distribution of election-related Facebook Live streams.” The decisions echoed Twitter’s approach, which required an extra click to view certain labeled tweets and prompted users to add their own commentary before sharing them.
In what has become a familiar occurrence, Facebook founder Mark Zuckerberg and Twitter founder Jack Dorsey found themselves the targets of bipartisan criticism in a hearing with lawmakers “about their moderation and labeling practices.” Pet refrains from both sides – the perceived silencing of conservative voices, the supposed efficacy of labels as opposed to more heavy-handed measures – were on display again, both relating to and independent of the election. Zuckerberg and Dorsey had “promised lawmakers last month that they would aggressively guard their platforms from being manipulated by foreign governments or used to incite violence around the election results — and they followed through with high-profile steps that angered Trump and his supporters,” said the Associated Press.
The Washington Post characterized the hearing as “more disorganized and lacking in a clear focus than previous tech hearings,” with topical shifts and disregarded time limits throughout. When asked by Senator Patrick Leahy (D-VT) whether their companies “had conducted a post-mortem of their response to misinformation,” both Dorsey and Zuckerberg said they had not yet done so but would, including by “allow[ing] some academics to have access to their companies’ information to do independent analyses.”
The 2020 presidential election has been declared the most secure in American history, but the hearing illustrated a salient point: “that Washington lawmakers are ill-equipped to take on” Big Tech. With the legislature severely polarized, Congress does not appear capable of passing real, drastic, and coordinated regulation of tech companies like Facebook and Twitter. Until that changes, the piecemeal approach to misinformation regulation will likely remain Big Tech’s modus operandi.