EU Court of Justice Rules on Facebook Content Removal

The European Union has not shied away from addressing issues with Big Tech. Regulatory efforts undertaken in Europe have influenced other countries around the world, and EU courts have played a leading role in clarifying the extent and scope of regulations. Now, the European Court of Justice (ECJ) has ruled “that individual countries can order Facebook to take down posts, photographs and videos not only in their own countries but elsewhere” – a decision with potentially seismic impact worldwide.

The Story So Far…

The ruling resolves a 2016 case involving Austrian politician Eva Glawischnig-Piesczek. Glawischnig-Piesczek was the subject of Facebook comments describing her in what she believed were defamatory terms. An Austrian court agreed, and Glawischnig-Piesczek “demanded Facebook erase the original comments worldwide, not just within the country, as well as posts with ‘equivalent’ remarks.” Facebook, which has consistently maintained that it is not responsible for content posted by its more than two billion users, refused to do so; a protracted legal battle ensued.

The ECJ’s ruling is more nuanced, however, than a blanket order to remove any and all content. The onus will be on courts to set standards for defamation, and the ruling determines that Facebook is not liable for comments “if it has no knowledge of [a comment’s] illegal nature or if it acts expeditiously to remove or disable access” to illegal comments when it becomes aware of them. The decision also “prohibits any requirement for the host provider…to monitor generally information which it stores or to seek actively facts or circumstances indicating illegal activity.”

The decision does not prevent “online platforms from being ordered by national authorities to remove illegal information or disable access to it” – a significant source of controversy. The Guardian reports that “a statement on behalf of the ECJ also declared EU member states can order internet companies to block access to ‘information [deemed unlawful] worldwide within the framework of the relevant international law, and it is up to member states to take that law into account.’” It is this particular aspect that has caused the most concern among critics.

Critics and Facebook Respond

Free speech advocates were not pleased with the decision. Thomas Hughes of Article 19, an English charity devoted to freedom of expression, told The Guardian that “compelling social media platforms like Facebook to automatically remove posts regardless of their context will infringe our right to free speech and restrict the information we see online.” Hughes also echoed a familiar piece of criticism: that decisions made by a court in one EU member state – decisions reflecting that country’s free speech standards – could also dictate what is acceptable in other countries.

Critics also expressed concern that automated technology – conceivably the most efficient way of catching and deleting offending posts – has severe limitations. Ars Technica’s analysis of the ECJ ruling included findings from Daphne Keller at Stanford’s Center for Internet and Society, author of a white paper that “[analyzed] potential outcomes” in advance of the decision. She raised concerns about the use of automated filters, which cannot understand context. “That means if text, images, or videos violate the law in one situation, filters will likely also block the same material in lawful uses like parody, journalism, or scholarship,” wrote Keller.

Facebook issued its own statement in response, a broad summary of concerns “around freedom of expression and the role that internet companies should play in monitoring, interpreting and removing speech that might be illegal in any particular country.” CEO Mark Zuckerberg has maintained that the platform’s Community Standards should govern the content shared on Facebook; he says the ruling goes much further than is acceptable, making litigation from Facebook and other companies likely.

Enforcement Issues

Much of the ruling’s impact will come down to, in Zuckerberg’s words, how courts define “identical” and “equivalent” messaging. The court’s decision stated that illegality has to do with the message, not word choice – “information conveying a message the content of which remains essentially unchanged” falls under that purview. For the ruling not to overstep its bounds, national courts “will have to set out very clear definitions on what [the words mean] in practice,” Zuckerberg told TechCrunch.

David Erdos, deputy director of the Centre for Intellectual Property and Information Law at Cambridge University, told the New York Times he believes courts will be up to the task. He characterized the opinion as “narrowly crafted,” commenting that “courts will be feeling their way [through decisions] for years to come.”

The online free speech debates show no signs of abating, with ideas of what is or isn’t acceptable still being shaped by courts and companies alike. Are attempts to address defamatory speech online too little, too late (or too draconian)? Time will tell – and for now, the courts are getting the final word.

Quandary Peak Research

Based in Los Angeles, Quandary Peak Research provides software litigation consulting and expert witness services. We rapidly analyze large code bases, design documents, performance and usage statistics, and other data to answer technical questions about the structure and behavior of software systems.