Lawmakers in Australia passed new legislation on Thursday to hold social media companies accountable for the spread of violent content on their platforms. Under the law, companies such as Facebook and YouTube could be subject to huge fines, and their executives threatened with jail time, if they do not ensure the “expeditious” removal of inappropriate material.
The Sharing of Abhorrent Violent Material bill was enacted following criticism of social media for enabling the live broadcast of last month’s massacres at two mosques in Christchurch, New Zealand, allegedly carried out by an Australian man, which left 50 people dead. Platforms struggled to remove the content even after the initial broadcast was taken down. Last week, Facebook CEO Mark Zuckerberg called for governments and regulators to play a more active role in suppressing harmful content.
But the Australian legislation has been criticized as too rushed, drafted without necessary consultation with tech companies and other stakeholders. There is also concern, as with other countries stepping up efforts to limit exposure to sensitive content online, that it could censor legitimate speech.
It has been reported that lawmakers in the U.K. are also considering introducing legislation that would aim to hold social media companies accountable for content carried on their platforms. According to the Guardian, the U.K. government is expected to publish plans on Monday to legislate for a duty of care by social media companies, which would be enforced by an independent regulator.
What does the new bill stipulate?
Under the legislation, media depicting “abhorrent violent conduct” (terrorism, murder, attempted murder, torture, rape or kidnapping, whether occurring inside or outside Australia) must be removed from social media platforms. Failure to do so “expeditiously” (an exact time frame is not specified) could expose companies to fines of up to 10% of their annual turnover, and employees to imprisonment for up to three years.
The bill seeks to ensure that online platforms “cannot be exploited and weaponized by perpetrators of violence,” according to a memorandum published on the Australian parliament website.
What are Australian politicians saying?
Christian Porter, a member of the Liberal Party, which governs the country as part of a coalition, said the bill “represents an important step” in ensuring that perpetrators do not use online platforms “for the purposes of spreading their violent and extreme fanatical propaganda.”
But many, including independent member of parliament Kerryn Phelps, criticized the rushed timeline with which the bill was pushed through. Calling the bill a “knee-jerk reaction,” Phelps said it could have “myriad unintended consequences,” such as discouraging internet platforms from operating in Australia in order to avoid legal exposure.
Another consequence, Phelps said, is that whistleblowers “may no longer be able to deploy social media to shine a light on atrocities committed around the world” as social media companies would remove them for fear of being charged with a crime.
Who is opposing the bill?
Groups representing tech companies, as well as human rights experts, have criticized the seemingly haphazard legislative process.
“This law, which was conceived and passed in five days without any meaningful consultation, does nothing to address hate speech,” Sunita Bose, the Managing Director of the Digital Industry Group Inc., which represents Google, Facebook and other tech giants in Australia, said, according to The New York Times.
“With the vast volumes of content uploaded to the internet every second,” Bose said, “this is a highly complex problem that requires discussion with the technology industry, legal experts, the media and civil society to get the solution right — that didn’t happen this week.”
Special representatives of the U.N. Human Rights Council called for the withdrawal of the bill and additional time for consultation. In a letter, two members of the council criticized its ambiguities: “The obligations to ‘expeditiously’ remove content and report it to law enforcement within a ‘reasonable’ time raise questions about how quickly service providers are expected to flag and identify offending content.”
The council added that the bill may “tip the scales in favor of disproportionate restrictions on freedom of expression” and therefore “undermine rather than protect the public interest.”