Big technology platforms are calling on the European Union to protect them from legal liabilities for removing hate speech and illegal content as government scrutiny over how platforms manage user posts grows worldwide.
A safeguard protecting companies that actively manage user posts would result in “better quality content moderation” by incentivizing platforms to remove bad content while protecting free expression, Edima, an association representing Facebook Inc., ByteDance Ltd.-owned TikTok, Alphabet Inc.’s Google and others, said in a paper Monday.
Current EU rules protect platforms from liability for what’s posted on their sites, unless they have “actual knowledge” of its presence — for instance, if a user flags it as harmful. Once platforms are made aware of illegal content, they’re obliged to act fast to remove it.
Tech firms fear that by removing content voluntarily, such as with algorithms or other systems to detect infringements, they could be deemed to have actual knowledge and be liable for hosting the bad posts. That’s becoming more of a concern as the European Commission, the bloc’s executive body, prepares to overhaul the longstanding rules to give platforms greater responsibility for the content spread on their sites for everything from hate speech and terrorist propaganda to unsafe toys.
“All of our members take their responsibility very seriously and want to do more to tackle illegal content and activity online,” said Siada El Ramly, director general of Edima. “A European legal safeguard for service providers would give them the leeway to use their resources and technology in creative ways in order to do so.”
Europe isn’t alone in increasing scrutiny of tech firms’ legal protections. A U.S. Senate panel has called the chief executive officers of Facebook and Twitter Inc. to testify about their content policies next month. President Donald Trump and other conservatives claim current liability laws for tech platforms enable the companies to silence their views.
In recent years, platforms have also come under intense scrutiny for failing to do enough to monitor activity such as hate speech that’s blamed for inciting violence in places like Myanmar, or for letting Russians spread disinformation to influence the 2016 U.S. presidential election and the UK’s Brexit vote.
Still, tech companies have been wary of shouldering too much legal responsibility for posts, which they say could harm freedom of speech by incentivizing firms to block more content than is necessary to avoid sanctions.
Edima said it’s sending its proposed amendments to officials in the European Commission, Parliament and Council. The amendments say providers should still be held accountable for inaction if they receive a substantiated notification of a specific illegality.
The EU doesn’t plan to remove the liability protections altogether, but could hit companies with fines if they fail to do enough. The commission is also considering a provision clarifying that measures to actively search for problematic content don’t take tech companies “outside the scope of the liability exemptions,” according to a draft of upcoming policy obtained by Bloomberg. A representative for the commission declined to comment on the draft.
As part of the regulatory overhaul, platforms could also face obligations to maintain a notification system for users to flag illegal content, to report regularly on content removal rates and to collect identification information from business users.
“Very large platforms” may face additional requirements, including providing more transparency around their content moderation, amplification of certain content and online advertising services. In addition, the EU is planning to set up a new board in charge of supporting national authorities to monitor compliance, according to the draft.
The EU proposals, which will also include a new regulation to curb the power of large platforms, are due to be unveiled in early December, but could still be delayed. Once proposed, the Commission, Parliament and Council will need to agree to a final version of the text before it becomes law.
Photograph: Platforms like Facebook and YouTube have come under intense scrutiny for failing to do enough to police hate speech. Photo credit: Uli Deck/Getty Images/picture alliance