Industry Efforts to Censor Pro-Terrorism Online Content Pose Risks to Free Speech
- Submitted by: Love Knowledge
- Category: Media
As groups like the Islamic State have gained traction online, Internet intermediaries have come under pressure from governments and other actors, including the following:
• the Obama Administration;
• the U.S. Congress in the form of legislative proposals that would require Internet companies to report “terrorist activity” to the U.S. government;
• the European Union in the form of a “code of conduct” requiring Internet companies to take down terrorist propaganda within 24 hours of being notified, and via the EU Internet Forum;
• individual European countries such as the U.K., France and Germany that have proposed exorbitant fines for Internet companies that fail to take down pro-terrorism content; and,
• victims of terrorism who seek to hold social media companies civilly liable in U.S. courts for providing “material support” to terrorists by simply providing online platforms for global communication.
One of the coordinated industry efforts against pro-terrorism online content is the development of a shared database of “hashes of the most extreme and egregious terrorist images and videos” that the companies have removed from their services. The companies that started this effort—Facebook, Microsoft, Twitter, and Google/YouTube—explained that the idea is that by sharing “digital fingerprints” of terrorist images and videos, other companies can quickly “use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.”
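To make the mechanics concrete, here is a minimal Python sketch of how such a shared hash database could work. It is purely illustrative: the companies have not published their scheme, and they are generally understood to use perceptual hashes that tolerate re-encoding and minor edits, whereas this sketch uses a simple exact-match cryptographic hash; every name below is invented.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for the consortium's shared database. In practice
# this would be a service the participating companies query, not a local set.
shared_hashes: set[str] = set()

def fingerprint(path: Path) -> str:
    """Return a hex digest of the file's bytes (an exact-match stand-in for
    the undisclosed perceptual hashing the companies reportedly use)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def contribute(path: Path) -> None:
    """A company adds the fingerprint of content it has already removed."""
    shared_hashes.add(fingerprint(path))

def matches_known_content(path: Path) -> bool:
    """Another company checks an upload against the shared database. A match
    only flags the item for review against that company's own policies; it
    does not by itself determine that the content will be removed."""
    return fingerprint(path) in shared_hashes
```

Even in this idealized form, the match is purely mechanical: an exact hash only catches byte-identical copies, and neither it nor a perceptual hash says anything about the context in which an image appears, which is where the free speech problems discussed below arise.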
As a second effort, the same companies created the Global Internet Forum to Counter Terrorism, which will help the companies “continue to make our hosted consumer services hostile to terrorists and violent extremists.” Specifically, the Forum “will formalize and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups and academics, governments and supra-national bodies such as the EU and the UN.” The Forum will focus on technological solutions; research; and knowledge-sharing, which will include engaging with smaller technology companies, developing best practices to deal with pro-terrorism content, and promoting counter-speech against terrorism.
Internet companies are also taking individual measures to combat pro-terrorism content. Google announced several new efforts, while both Google and Facebook have committed to using artificial intelligence technology to find pro-terrorism content for removal.
Private censorship must be cautiously deployed
While Internet companies have a First Amendment right to moderate their platforms as they see fit, private censorship—or what we sometimes call shadow regulation—can be just as detrimental to users’ freedom of expression as governmental regulation of speech. As social media companies increase their moderation of online content, they must do so as cautiously as possible.
Through our project Onlinecensorship.org, we monitor private censorship and advocate for companies to be more transparent and accountable to their users. We solicit reports from users whose specific posts, other content, or whole accounts have been removed by Internet companies.
We consistently urge companies to follow basic guidelines to mitigate the impact on users’ free speech. Specifically, companies should have narrowly tailored, clear, fair, and transparent content policies (i.e., terms of service or “community guidelines”); they should engage in consistent and fair enforcement of those policies; and they should have robust appeals processes to minimize the impact on users’ freedom of expression.
Over the years, we’ve found that companies’ efforts to moderate online content almost always result in overbroad content takedowns or account deactivations. We are therefore justifiably skeptical that the latest efforts by Internet companies to combat pro-terrorism content will meet our basic guidelines.
A central problem for these global platforms is that such private censorship can be counterproductive. Users who engage in counter-speech against terrorism often find themselves on the wrong side of the rules if, for example, their post includes an image of one of the more than 600 “terrorist leaders” designated by Facebook. In one instance, a journalist from the United Arab Emirates was temporarily banned from the platform for posting a photograph of Hezbollah leader Hassan Nasrallah with an LGBTQ pride flag overlaid on it, a clear case of parody counter-speech that Facebook’s content moderators failed to grasp.
A more fundamental problem is that narrow definitions are hard to come by. What counts as speech that “promotes” terrorism? What even counts as “terrorism”? These U.S.-based companies may look to the State Department’s list of designated terrorist organizations as a starting point. But Internet companies will sometimes go further. Facebook, for example, deactivated the personal accounts of Palestinian journalists; it did the same to Chechen independence activists on the pretext that they were involved in “terrorist activity.” These examples demonstrate the challenges social media companies face in fairly applying their own policies.
A recent investigative report by ProPublica revealed how Facebook’s content rules can lead to seemingly inconsistent takedowns. The authors wrote: “[T]he documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.” The report emphasized the need for companies to be more transparent about their content rules, and to have rules that are fair for all users around the world.
Artificial intelligence poses special concerns
We are concerned about the use of artificial intelligence to combat pro-terrorism content because of the imprecision inherent in systems that automatically block or remove content based on an algorithm. Facebook has perhaps been the most aggressive in deploying machine learning for this purpose. The company’s latest AI efforts include using image matching to detect previously tagged content, using natural language processing techniques to detect posts advocating for terrorism, removing terrorist clusters, removing new fake accounts created by repeat offenders, and enforcing its rules across other Facebook properties such as WhatsApp and Instagram.
This imprecision exists because it is difficult for humans and machines alike to understand the context of a post. While it’s true that computers are better at some tasks than people, understanding context in written and image-based communication is not one of those tasks. AI algorithms can handle very simple reading-comprehension problems, but they still struggle with even basic tasks such as capturing the meaning of children’s books. And while future improvements to machine learning may give AI these capabilities, we’re not there yet.
Google’s Content ID, for example, which was designed to address copyright infringement, has also blocked fair uses, news reporting, and even posts by copyright owners themselves. If automatic takedowns based on copyright are difficult to get right, how can we expect new algorithms to know the difference between a terrorist video clip that’s part of a satire and one that’s genuinely advocating violence?
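To make the context problem concrete, consider a deliberately naive, entirely hypothetical keyword filter (none of the companies has published its actual models, and the terms and posts below are invented for illustration). Counter-speech and news reporting quote the same vocabulary as the propaganda they criticize, so a system keyed to surface features flags all three.

```python
# A deliberately naive, hypothetical filter: flag any post containing terms
# drawn from known propaganda. Real systems are far more sophisticated, but
# they face the same gap between surface features and context.
FLAGGED_TERMS = {"caliphate", "martyrdom operation", "join the fight"}

def looks_like_propaganda(post: str) -> bool:
    text = post.lower()
    return any(term in text for term in FLAGGED_TERMS)

propaganda = "Join the fight and earn glory in a martyrdom operation."
counter_speech = ("Recruiters promise a 'caliphate' and glorify the "
                  "'martyrdom operation' -- here is why that is a lie.")
news_report = "The group again urged followers to join the fight, officials said."

for post in (propaganda, counter_speech, news_report):
    print(looks_like_propaganda(post), "->", post[:50])
# All three posts are flagged: the signal that separates advocacy from
# criticism or reporting is context, which the filter never sees.
```

Real classifiers use far richer features than a term list, but the underlying difficulty is the same: the distinction between advocacy and satire, commentary, or reporting lives in context, not vocabulary.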
Until companies can publicly demonstrate that their machine learning algorithms can accurately and reliably determine whether a post is satire, commentary, news reporting, or counter-speech, they should refrain from censoring their users by way of this AI technology.
Even if a company had an algorithm for detecting pro-terrorism content that was accurate, reliable, and produced few false positives, AI automation would still be problematic, because machine learning systems are not robust to distributional change. Once a machine learning model is trained, it is as brittle as any other algorithm, and building and training such a model for a complex task is an expensive, time-intensive process. Yet the world the algorithms operate in is constantly evolving, and it soon won’t look like the world in which the algorithms were trained.
Here is how this might play out with pro-terrorism content on social media: once terrorists realize that algorithms are identifying their content, they will start gaming the system, hiding or altering their content so that the AI no longer recognizes it (by leaving out key words, changing their sentence structure, or in myriad other ways, depending on the specific algorithm). The problem can also run the other way: a change in culture, or in how some group of people express themselves, could cause an algorithm to start tagging their posts as pro-terrorism content even though they are nothing of the kind (for example, if people co-opted a slogan previously used by terrorists in order to de-legitimize the terrorist group).
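Both failure modes can be sketched with the same hypothetical keyword filter used above (again, all terms and posts are invented): trivial obfuscation defeats the match once posters adapt, while an innocuous post that co-opts a flagged slogan becomes a false positive.

```python
# The same hypothetical filter as above, shown failing in both directions.
FLAGGED_TERMS = {"caliphate", "join the fight"}

def looks_like_propaganda(post: str) -> bool:
    text = post.lower()
    return any(term in text for term in FLAGGED_TERMS)

# 1. Evasion: once posters learn the filter's vocabulary, simple character
#    substitutions shift the input distribution and the content slips through.
evasive_post = "J0in the f1ght, brothers."
print(looks_like_propaganda(evasive_post))   # False: a miss

# 2. Drift: a slogan co-opted to mock the group is flagged as propaganda.
mocking_post = "Their so-called 'caliphate' can't even keep the lights on."
print(looks_like_propaganda(mocking_post))   # True: a false positive
```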
We strongly caution companies (and governments) against assuming that technology will be a panacea for identifying pro-terrorism content, because this technology simply doesn’t yet exist.
Is taking down pro-terrorism content actually a good idea?
Apart from the free speech and artificial intelligence concerns, there is an open question of efficacy. The sociological assumption behind these takedown efforts is that removing pro-terrorism content will reduce terrorist recruitment and community sympathy for those who engage in terrorism. The question, in other words, is not whether terrorists are using the Internet to recruit new operatives; it is whether taking down pro-terrorism content and accounts will meaningfully contribute to the fight against global terrorism.
Governments have not sufficiently demonstrated this to be the case, and some experts believe it is absolutely not the case. For example, Michael German, a former FBI agent with counter-terrorism experience and currently a fellow at the Brennan Center for Justice, said, “Censorship has never been an effective method of achieving security, and shuttering websites and suppressing online content will be as unhelpful as smashing printing presses.” In fact, as we’ve argued before, censoring the content and accounts of determined groups could be counterproductive, resulting in pro-terrorism content being publicized more widely (a phenomenon known as the Streisand Effect).
Additionally, permitting terrorist accounts to exist and allowing pro-terrorism content, including publicly available content, to remain online may actually be beneficial, because it provides opportunities for ongoing engagement with these groups. For example, a Kenyan government official stated that shutting down an Al Shabaab Twitter account would be a bad idea: “Al Shabaab needs to be engaged positively and [T]witter is the only avenue.”
Keeping pro-terrorism content online also supports journalism, open source intelligence gathering, academic research, and, more generally, the global community’s understanding of this tragic and complex social phenomenon. On intelligence gathering, the United Nations has said that “increased Internet use for terrorist purposes provides a corresponding increase in the availability of electronic data which may be compiled and analysed for counter-terrorism purposes.”
In conclusion
While we recognize that Internet companies have a right to police their own platforms, we also recognize that such private censorship often comes in response to government pressure, and that pressure is not always legitimately wielded.
Governments often get private companies to do what they cannot do themselves. In the U.S., for example, pro-terrorism content generally falls within the protection of the First Amendment, so the government cannot simply outlaw it. Other countries, many of which lack similarly robust constitutional protections, might nevertheless find it politically difficult to pass speech-restricting laws, and so pressure companies to do the censoring instead.
Ultimately, we are concerned about the serious harm that sweeping censorship regimes, even those run by private actors, can inflict on users and on society at large. Internet companies must be accountable to their users as they deploy policies that restrict content.
First, they should make their content policies narrowly tailored, clear, fair, and transparent to all—as the Guardian’s Facebook Files demonstrate, some companies have a long way to go.
Second, companies should engage in consistent and fair enforcement of those policies.
Third, companies should ensure that all users have access to a robust appeals process—content moderators are bound to make mistakes, and users must be able to seek justice when that happens.
Fourth, until artificial intelligence systems can be proven accurate, reliable and adaptable, companies should not deploy this technology to censor their users’ content.
Finally, we urge those companies that are subject to increasing governmental demands for backdoor censorship regimes to improve their annual transparency reporting to include statistics on takedown requests related to the enforcement of their content policies.
Read more https://www.eff.org/deeplinks/2017/07/industry-efforts-censor-pro-terrorism-online-content-pose-risks-free-speech
Related items
- The New COINTELPRO? Meet the Activist the FBI Labeled a “Black Identity Extremist” & Jailed 5 Months
- Between Boycotts and Special Interest Campaigns: the Chilling of Speech on Israel and Palestine
- The Right is Waging War on Academic Freedom
- FCC Set to Roll Back Digital Civil Rights with Thursday's Vote to Repeal Net Neutrality
- Internet Censorship Bills Wouldn’t Help Catch Sex Traffickers