Social media giant Twitter has suggested that growing efforts to clamp down on 'undesirable content' on social media will inevitably impact freedom of expression online.
In its latest transparency report, under a section titled ‘Legal Threats to Freedom of Expression,’ the company suggested new legislation and ongoing regulatory discussions taking place around the world about the future of public discourse online will have "a potential chilling effect with regards to freedom of expression.”
The statement went on: "According to Human Rights Watch, the wave of regulatory pressure in Europe and beyond is setting an emerging precedent and creating a 'domino effect' as 'governments around the world increasingly look to restrict online speech by forcing social media companies to act as their censors.'
"As regulators explore further potential restrictions, transparency is one of the most important ways we can continue to protect freedom of expression."
"Between August 1, 2015 and December 31, 2017, Twitter suspended 1,210,357 accounts for violations related to the promotion of terrorism. Read more in our 12th Transparency Report: https://t.co/e4ridwmnVI" — Twitter Public Policy (@Policy), April 5, 2018
The company's report also linked to Human Rights Watch criticism of Germany's recently implemented social media law, under which platforms face fines of up to €50m for failing to promptly delete 22 different types of illegal content.
HRW has called the law "vague" and "overbroad," saying it "turns private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal," and suggested the law has had a "domino effect," with at least half a dozen governments, including those of Russia and the UK, now proposing similar restrictions.
The HRW report identifies two key aspects of the law which violate Germany’s obligation to respect free speech.
"First, the law places the burden on companies that host third-party content to make difficult determinations of when user speech violates the law, under conditions that encourage suppression of arguably lawful speech.
"Even courts can find these determinations challenging, as they require a nuanced understanding of context, culture, and law.
"Second, the law fails to provide either judicial oversight or a judicial remedy should a cautious corporate decision violate a person’s right to speak or access information.
Russia Today (RT) reported today that Silicon Valley giants have also tried to stave off domestic regulatory pressure with proactive measures.
RT reports that earlier this week Facebook closed 270 accounts, citing violations of user guidelines over their connections to the alleged Russian 'troll factory' the Internet Research Agency, but did not provide the public with any evidence to explain the closures.
The RT website also reports that Twitter has been embroiled in controversy over the alleged "shadow-banning" of conservative voices on the 330-million-strong platform, despite claims in Thursday's transparency statement that it sees transparency as "one of the most important ways we can continue to protect freedom of expression." The company pointed users to its Lumen database, which collects justifications for various suspensions.
In the transparency report, Twitter announced it had deleted 274,460 accounts in the second half of 2017 "for violations related to the promotion of terrorism," noting that three-quarters of the suspended users were removed, with the help of "internal proprietary tools," before they had posted a single tweet.
The company went on: "Twitter is proud of our industry-leading policies regarding transparency about content restrictions. This includes our practice of uploading actioned requests to withhold content to Lumen, an independent database that collects and analyzes removal requests for content online (unless we are prohibited from doing so, e.g., if we receive a court order under seal).
"Lumen serves as a critical transparency resource as more freedom of expression comes under fire, by making such requests available for public review.
"Additionally, upon receipt of requests to withhold content, we promptly notify affected users (unless, like with our Lumen uploads, we are otherwise prohibited from doing so)."
Meanwhile, other social media companies have detailed the steps they are taking to clamp down on terrorist content.
YouTube has introduced "machine learning" to help identify extremist and terror-related material, removing more than 150,000 videos between June and December.
Facebook has deployed artificial intelligence as part of its efforts, saying 99% of Islamic State and al-Qaida-related content is removed before it is flagged by users.
- Digital Desk