Social media has accumulated remarkable power. Large companies like Facebook have aggressively expanded into new territories, further fueling growth and revenue – Facebook has a staggering 2.38 billion monthly active users as of March 31, 2019.
But growth at all costs has come at a price. Facebook has taken its share of lumps from critics for numerous recent indiscretions, chief among them the Cambridge Analytica scandal in 2018. By allowing the data of 87 million users to be harvested without their consent, Facebook invited pressure from lawmakers and the public. That pressure was compounded by conclusions from American intelligence agencies that Russian operatives used fake accounts to spread propaganda on the platform during the 2016 presidential election, drawing cross-border scrutiny and, when the company was questioned, accusations of disingenuity and obstruction.
With fake or inaccurate information distributed via Facebook causing violence in countries like Sri Lanka, Myanmar, and India, Facebook and other companies are facing additional pressure to curb dangerous content. Now Australia has passed legislation that promises “huge fines for social media companies and jail for their executives if they fail to rapidly remove ‘abhorrent violent material’ from their platforms” – to considerable opposition from tech giants.
Recent tragedies around the world have trained a spotlight on the power Facebook wields as a disseminator of information. The platform has found itself “weaponized”, in the words of Australian Attorney General Christian Porter, with the recent live-streamed mass shooting of worshippers at two mosques in Christchurch, New Zealand, by a white nationalist serving as the latest data point.
That tragedy spurred Australian legislators to action. In what Porter termed “likely a world first,” the country quickly passed legislation “[criminalizing] ‘abhorrent violent material.’” That material is defined as “videos [showing] terrorist attacks, murders, rape, or kidnapping.” Refusal to “expeditiously” remove offending content can incur serious penalties: “fines up to 10 percent of their annual profit, and [high-level] employees could be sentenced to up to three years in prison.” Companies are also required to disclose findings to law enforcement.
How Responsible Are Companies for Their Content?
To Porter, the legislation was simply an extension of the “near unanimous view” among Australian citizens that platforms must be held responsible for the content available on them. But critics have noted the law, which the New York Times characterizes as “written quickly and without much input from technology companies or experts,” may be problematic on multiple levels. Critics say it is riddled with ambiguities, heightening the possibility that overcautious companies, eager to avoid the consequences of its vague wording, will censor legitimate speech.
Tech giants like Google and Amazon joined Facebook in voicing their displeasure, claiming that the “‘proactive’ surveillance” required by the bill could damage Australia’s diplomatic relations “while criminalizing content reposted by users without the companies knowing about it.” Industry lobbyists were also up in arms on behalf of the companies they represent. Sunita Bose, managing director of Digital Industry Group Inc., which represents Facebook and Google, noted the “highly complex” nature of the problem, which necessitates “discussion with the technology industry, legal experts, the media, and civil society to get the solution right.”
The problem is indeed highly complex. Lawfare described the challenges of responding to the Christchurch live stream: first, the video evaded algorithmic detection because Facebook did not have a large library of similar content to train against; in such cases, the fallback is user reporting, which did not happen until 12 minutes after the broadcast ended.
After being reported, Facebook needed to stop the content from spreading – it managed to prevent or delete 1.5 million uploads of the video in the ensuing 24 hours. The task was made more difficult by “[coordinated reuploads of] differently edited versions of the video designed to defeat detection, mainstream media were rebroadcasting it, other websites and pages seeking attention recut and rerecorded the video, and individuals were seeking it out at a higher rate prompted by reporting on the video’s existence.”
What Comes Next?
The legislation leaves ample gray area and invites myriad problems, making legal challenges likely; questions about the law’s reach and the powers it grants may not be resolved without them. Critics note the bill fails to address what experts have characterized as a major problem with eliminating harmful content: “banned posts, photos and videos continue to linger online because the platforms, using both human moderators and technology, have been unable to identify and remove every iteration of illegal content.”
Complicating matters further is the timing of Australia’s upcoming elections in May. The legislation was pushed through by conservative and Labor Party officials alike, likely with an eye toward public perception ahead of the vote. That urgency meant practical considerations were secondary to passing broad legislation quickly.
Problems aside, the bill does represent another step forward in a broader trend: holding technology companies accountable. Early legislation has lacked the cohesion that will likely come with time and experience, but the larger movement endures. Efficacy may vary, but tech companies can no longer coast on the goodwill they have engendered, for better or worse.