Microsoft Corp. President Brad Smith has called for a broad effort by the technology sector to do more to halt the posting and accessing of violent material, such as the videos of the killings in the recent terrorist attacks in New Zealand.
“Words alone are not enough. Across the tech sector, we need to do more. Especially for those of us who operate social networks or digital communications tools or platforms that were used to amplify the violence, it’s clear that we need to learn from and take new action based on what happened in Christchurch,” he wrote in the company blog on Sunday as he and others from his company began a visit to New Zealand.
He said the tech sector bears responsibility to act, even in places such as the U.S. where laws generally do not hold platforms liable for the content they publish. Courts have turned aside lawsuits against Facebook and Twitter that allege they had some responsibility for terrorist attacks.
“As [New Zealand] Prime Minister Jacinda Ardern noted last week, gone are the days when tech companies can think of their platforms akin to a postal service without regard to the responsibilities embraced by other content publishers. Even if the law in some countries gives digital platforms an exemption from decency requirements, the public rightly expects tech companies to apply a higher standard,” he stated.
He said technology alone cannot solve this problem, and that the industry needs to consider what additional controls people working at tech companies should apply to the posting of violent material.
“There are legal responsibilities that need to be discussed as well. It’s a complicated topic with important sensitivities in some parts of the tech sector. But it’s an issue whose importance can no longer be avoided,” Smith wrote.
He said that the responsibility to come up with solutions is shared by all technology firms and not just those “on the hot seat” at a time of controversy.
“The question is not just what technology did to exacerbate this problem, but what technology and tech companies can do to help solve it. Put in these terms, there is room – and a need – for everyone to help,” he stated.
“This is the type of serious challenge that requires broad discussion and collaboration with people in governments and across civil society around the world,” he added.
The Microsoft executive, who also serves as the company’s top lawyer, suggested three areas where efforts should be focused:
The first is prevention: stopping “perpetrators from posting and sharing acts of violence against innocent people.” He suggested that new technologies, including AI, could help identify violent content, while browser-based tools like safe search might block access to such content.
Second, he said the industry needs to respond more effectively to moments of crisis. The tech sector should consider creating a “major event” protocol, in which technology companies would work from a joint virtual command center during a major incident.
Third, the industry should work to foster a healthier online environment and digital civility. “There are too many days when online commentary brings out the worst in people,” he noted. “While there’s obviously a big leap from hateful speech to an armed attack, it doesn’t help when online interaction normalizes in cyberspace standards of behavior that almost all of us would consider unacceptable in the real world.”
Tech firms have taken some steps to limit violent content. YouTube, Facebook, Twitter and Microsoft in 2016 started sharing a database of digital fingerprints assigned to militant content to help each other identify the same content on their platforms.
In 2017, Facebook said it was using image matching and language understanding to identify and remove content quickly.
Facebook uses artificial intelligence for image matching to see if a photo or video matches any from groups it has defined as terrorist, and it analyzes text it has already removed for supporting militant organizations to help it better identify such propaganda.
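The shared fingerprint database the companies set up in 2016 amounts, at its core, to a set-membership check: each platform contributes hashes of content it has already identified, and uploads are checked against the pool. The sketch below is illustrative only; the consortium’s actual hashing technology (such as Microsoft’s PhotoDNA) is a proprietary perceptual hash, so a plain SHA-256 of the file bytes stands in here and only catches byte-identical re-uploads.

```python
import hashlib

# Shared pool of fingerprints of known violent/extremist content.
# In practice each participating platform contributes hashes of
# material it has already identified and removed.
shared_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    # Stand-in for a perceptual hash such as PhotoDNA; a cryptographic
    # hash like SHA-256 only matches exact byte-for-byte copies.
    return hashlib.sha256(content).hexdigest()

def report(content: bytes) -> None:
    """One platform flags content, adding its fingerprint to the shared pool."""
    shared_hashes.add(fingerprint(content))

def is_known(content: bytes) -> bool:
    """Any participating platform can check an upload against the pool."""
    return fingerprint(content) in shared_hashes

# One platform reports a video; another platform later sees the same file.
report(b"flagged video bytes")
print(is_known(b"flagged video bytes"))  # True: exact re-upload is caught
print(is_known(b"different bytes"))      # False: unseen content passes
```

A perceptual hash would additionally match re-encoded or slightly edited copies, which is why exact cryptographic hashing alone is not enough for this use case.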