The biggest and most widely used technology platforms and services are being scrutinized ahead of the 2020 US presidential election. The central issue pits truth against freedom of speech: should the platforms that host and profit from user content and advertising be responsible for fact-checking it?

For Facebook, Twitter, and Google, that question is particularly fraught in the wake of misinformation campaigns surrounding the 2016 US presidential election. Ads make up a massive part of each company’s bottom line, and robust micro-targeting tools allow advertisers to reach audiences with remarkable specificity. The tech giants have found themselves walking a tightrope: under pressure to police the truthfulness of content on their platforms while avoiding accusations of suppressing free speech. At an especially polarized political moment in the United States, and with the 2020 election kicking into gear, it is worth taking a look at each company’s stance on political ads, and how it got there.


Facebook

Facebook recently decided not to fact-check political ads, earning criticism from the political left and a fairly measured response from the right. The company announced its decision not to police political messaging in October 2019, and while initial internal and external pressure made it seem that tweaks were likely, Facebook instead announced in early January 2020 that it was “updating our Ad Library to increase the level of transparency it provides for people and giving them more control over the ads they see” rather than overhauling its policies.

Facebook’s stance is rooted in the idea that rules governing political ads should not be decided by private companies. Instead, the company says it is “arguing for regulation that would apply across the industry… frankly, we believe the sooner Facebook and other companies are subject to democratically accountable rules on this the better.”

The company is instead offering tools that allow users to view the estimated target audience of political ads via its Ad Library feature; “better Ad Library searching and filtering;” increased control over “how an advertiser can reach them with a Custom Audience from a list;” and the option to see fewer political ads in general. Facebook also reaffirmed that all users (including political figures and campaigns) “must abide by our Community Standards, which apply to ads and include policies that, for example, ban hate speech, harmful content and content designed to intimidate voters or stop them from exercising their right to vote.”


Twitter

In October 2019, Twitter CEO Jack Dorsey announced the company would ban all paid political ads worldwide from its platform starting November 22, 2019, stating that the company believes “political message reach should be earned, not bought.” The move was the culmination of a series of actions after the 2016 presidential election, including the platform “requiring advertisers to verify their identities and… [publish] a database of political ads that ran on its service.”

The announcement also raised questions about what, exactly, qualifies as a political ad. The company held discussions with organizations including the American Civil Liberties Union and the Public Affairs Council to further clarify the policy. Nick DeSarno of the Public Affairs Council characterized the decision to the New York Times as the company “trying to split the difference between limiting politicians from placing ads while allowing advocacy organizations to continue raising awareness about political topics.”

The official ad policy clarifies that political content is “content that references a candidate, political party, elected or appointed government official, election, referendum, ballot measure, legislation, regulation, directive, or judicial outcome.” It prohibits “ads that contain references to political content, including appeals for votes, solicitations of financial support, and advocacy for or against any of the above-listed types of political content” while also banning “ads of any type by candidates, political parties, or elected or appointed government officials,” as well as ads from PACs, Super PACs, and 501(c)(4)s in the United States.


Google

Google announced in November 2019 that it would make changes to its political ad policy in order to “help promote confidence in digital political advertising and trust in electoral processes worldwide.” The changes, which went into effect in the US on January 6, 2020, limit micro-targeting abilities in several ways, including no longer allowing political advertisers to target voters based on political affiliation.

Scott Spencer, the VP of Product Management for Google Ads, outlined the policy tweaks in a detailed blog post. He stressed that Google applies the same ads policies to everyone, while clarifying that “robust political dialogue is an important part of democracy, and no one can sensibly adjudicate every political claim, counterclaim, and insinuation,” meaning the company will focus on clear violations of their policy.

Google has limited audience targeting to broader demographics like age, gender, and general location (postal code level). It has also maintained “contextual targeting, such as serving ads to people reading or watching a story about, say, the economy.” And it will increase transparency by expanding in-ad disclosures and transparency reports to include US state-level candidates and officeholders, ballot measures, and ads “that mention federal or state political parties.”

Looking Ahead

Facebook, Twitter, and Google have each staked out a different position on this matter, underscoring the complexity of political advertising in the digital era. These companies sit at the collision point of false political ads, real political ads, election interference from foreign powers, and free speech. With no clear regulation or legislation on how these matters should be treated online, private companies are currently making the rules themselves.