Censorship – Communication Platform Faces Loss of Major Advertisers Seeking Content Control
Facebook Inc. reports 6.30.2020.
Banning a Violent Network in the US
Today we are designating a violent US-based anti-government network as a dangerous organization and banning it from our platform. This network uses the term boogaloo but is distinct from the broader and loosely-affiliated boogaloo movement because it actively seeks to commit violence. For months, we have removed boogaloo content when there is a clear connection to violence or a credible threat to public safety, and today’s designation will mean we remove more content going forward, including Facebook Groups and Pages. This is the latest step in our commitment to ban people who proclaim a violent mission from using our platform.
As part of today’s action, we are designating a violent US-based anti-government network under our Dangerous Individuals and Organizations policy and disrupting it on our services. As a result, this violent network is banned from having a presence on our platform and we will remove content praising, supporting or representing it. This network appears to be based across various locations in the US, and the people within it engage with one another on our platform. It is actively promoting violence against civilians, law enforcement and government officials and institutions. Members of this network seek to recruit others within the broader boogaloo movement, sharing the same content online and adopting the same offline appearance as others in the movement to do so.
Facebook designates non-state actors under our Dangerous Individuals and Organizations policy after a rigorous process that takes into account both online and offline behavior. During this process, we work to identify an actor’s goals and whether they have a track record of offline violence. We know the initial elements of the boogaloo movement began as far back as 2012, and we have been closely following its developments since 2019. We understand that the term has been adopted by a range of anti-government activists who generally believe civil conflict in the US is inevitable. But activists are divided over numerous issues, including the goal of a civil conflict, racism and anti-Semitism, and whether to instigate violent conflict or be prepared to react when it occurs. We noted that some people who participated in the Gun Rights Rally that took place in Richmond, VA on January 20, 2020, wore the outfit now typical of boogaloo adherents, and we have since tracked the movement’s expansion as participants engage at various protests and rallies across the country. More recently, officials have identified violent adherents to the movement as those responsible for several attacks over the past few months. These acts of real-world violence and our investigations into them are what led us to identify and designate this distinct network.
In order to make Facebook as inhospitable to this violent US-based anti-government network as possible, we conducted a strategic network disruption of their presence today, removing 220 Facebook accounts, 95 Instagram accounts, 28 Pages and 106 groups that currently comprise the network. We have also removed over 400 additional groups and over 100 other Pages for violating our Dangerous Individuals and Organizations policy, as they hosted similar content to the violent network we disrupted but were maintained by accounts outside of it. As part of our designation process, we will now identify where we can strengthen how we enforce our policy against this banned network and spot attempts by the violent US anti-government network to return to our platform.
Today’s designation is not the first time we’ve taken action against violence within the boogaloo movement. We have always removed boogaloo content when we identify a clear call for violence. As a result, we removed over 800 posts for violating our Violence and Incitement policy over the last two months and limited the distribution of Pages and groups referencing the movement by removing them from the recommendations we show people on Facebook.
So long as violent movements operate in the physical world, they will seek to exploit digital platforms. We are stepping up our efforts against this network and know there is still more to do. As we’ve seen following other designations, we expect to see adversarial behavior from this network including people trying to return to using our platform and adopting new terminology. We are committed to reviewing accounts, Groups, and Pages, including ones currently on Facebook, against our Dangerous Individuals and Organizations policy. We are grateful to researchers, investigators and reporters who identify the fault lines that help us focus on elements of the broad boogaloo movement that pose the greatest risk of real harm.
We will continue to study new trends, including the language and symbols this network shares online so we can take the necessary steps to keep those who proclaim a violent mission off our platform. We know that our efforts will never completely eliminate the risk from this network, or other dangerous organizations, but we will continue to remove content and accounts that break our rules so we can keep people safe.
Combating Hate and Extremism
Facebook reports 9.17.2019.
Today, we’re sharing a series of updates and shifts that improve how we combat terrorists, violent extremist groups and hate organizations on Facebook and Instagram. These changes primarily impact our Dangerous Individuals and Organizations policy, which is designed to keep people safe and prevent real-world harm from manifesting on our services. Some of the updates we’re sharing today were implemented in the last few months, while others went into effect last year but haven’t been widely discussed.
Some of these changes predate the tragic terrorist attack in Christchurch, New Zealand, but that attack, and the global response to it in the form of the Christchurch Call to Action, has strongly influenced the recent updates to our policies and their enforcement. First, the attack demonstrated the misuse of technology to spread radical expressions of hate, and highlighted where we needed to improve detection and enforcement against violent extremist content. In May, we announced restrictions on who can use Facebook Live and met with world leaders in Paris to sign the New Zealand Government’s Christchurch Call to Action. We also co-developed a nine-point industry plan in partnership with Microsoft, Twitter, Google and Amazon, which outlines the steps we’re taking to address the abuse of technology to spread terrorist content.
Improving Our Detection and Enforcement
Two years ago, we described some of the automated techniques we use to identify and remove terrorist content. Our detection techniques include content matching, which allows us to identify copies of known bad material, and machine-learning classifiers that identify and examine a wide range of factors on a post and assess whether it’s likely to violate our policies. To date, we have identified a wide range of groups as terrorist organizations based on their behavior, not their ideologies, and we do not allow them to have a presence on our services. While our intent was always to use these techniques across different dangerous organizations, we initially focused on global terrorist groups like ISIS and al-Qaeda. This has led to the removal of more than 26 million pieces of content related to global terrorist groups like ISIS and al-Qaeda in the last two years, 99% of which we proactively identified and removed before anyone reported it to us.
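The content-matching idea described above can be illustrated with a minimal sketch: compare each upload’s fingerprint against a bank of hashes of previously identified violating material. This is only an exact-match toy using standard cryptographic hashing; production systems of this kind typically use perceptual hashing and other techniques, and the data here is purely hypothetical.

```python
# Minimal illustration of hash-based content matching: flag uploads whose
# fingerprint matches a bank of hashes of known violating material.
# This is an assumption-laden sketch, not any platform's actual system.
import hashlib


def fingerprint(data: bytes) -> str:
    """Return a stable fingerprint (SHA-256 hex digest) for a piece of content."""
    return hashlib.sha256(data).hexdigest()


# Hash bank of previously identified violating content (hypothetical example).
known_bad_hashes = {fingerprint(b"known violating clip")}


def matches_known_bad(upload: bytes) -> bool:
    """True if the upload is an exact copy of banked violating content."""
    return fingerprint(upload) in known_bad_hashes
```

Exact matching only catches verbatim copies; novel or modified content slips through, which is why the classifiers mentioned above complement the hash bank.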
We’ve since expanded the use of these techniques to a wider range of dangerous organizations, including both terrorist groups and hate organizations. We’ve banned more than 200 white supremacist organizations from our platform, based on our definitions of terrorist organizations and hate organizations, and we use a combination of AI and human expertise to remove content praising or supporting these organizations. The process to expand the use of these techniques started in mid-2018 and we’ll continue to improve the technology and processes over time.
We’ll need to continue to iterate on our tactics because we know bad actors will continue to change theirs, but we think these are important steps in improving our detection abilities. For example, the video of the attack in Christchurch did not trigger our automatic detection systems because we did not have enough content depicting first-person footage of violent events to effectively train our machine learning technology. That’s why we’re working with government and law enforcement officials in the US and UK to obtain camera footage from their firearms training programs – providing a valuable source of data to train our systems. With this initiative, we aim to improve our detection of real-world, first-person footage of violent events and avoid incorrectly detecting other types of footage such as fictional content from movies or video games.
Updating Our Policy
While terrorism is a global issue, there is currently no globally recognized and accepted definition of terrorist organizations. So we’ve developed a definition to guide our decision-making on enforcing against these organizations. We are always looking to see where we can improve and refine our approach, and we recently updated how we define terrorist organizations in consultation with counterterrorism, international humanitarian law, freedom of speech, human rights and law enforcement experts. The updated definition still focuses on the behavior, not ideology, of groups. But while our previous definition focused on acts of violence intended to achieve a political or ideological aim, our new definition more clearly delineates that attempts at violence, particularly when directed toward civilians with the intent to coerce and intimidate, also qualify.
Giving People Resources to Leave Behind Hate
Our efforts to combat terrorism and hate don’t end with our policies. In March, we started connecting people who search for terms associated with white supremacy on Facebook Search to resources focused on helping people leave behind hate groups. When people search for these terms in the US, they are directed to Life After Hate, an organization founded by former violent extremists that provides crisis intervention, education, support groups and outreach. And now, we’re expanding this initiative to more communities.
We’re expanding this initiative to Australia and Indonesia and partnering with Moonshot CVE to measure the impact of these efforts to combat hate and extremism. Being able to measure our impact will allow us to hone best practices and identify areas where we need to improve. In Australia and Indonesia, when people search for terms associated with hate and extremism, they will be directed to EXIT Australia and ruangobrol.id respectively. These are local organizations focused on helping individuals disengage from violent extremism and terrorism. We plan to continue expanding this initiative, and we’re consulting partners to further build this program in Australia and explore potential collaborations in New Zealand. And by using Moonshot CVE’s data-driven approach to disrupting violent extremism, we’ll be able to develop and refine how we track the progress of these efforts across the world to connect people with information and services to help them leave hate and extremism behind. We’ll continue to seek out partners in countries around the world where local experts are working to disengage vulnerable audiences from hate-based organizations.
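The redirect mechanism described above amounts to a lookup: a flagged search term plus the searcher’s country determines which local support organization to surface. The organization names come from the text; the function, the country codes, and the placeholder term list are illustrative assumptions, not the real implementation.

```python
# Hypothetical sketch of the search-redirect logic: map a user's country
# to the local counter-extremism resource shown for flagged search terms.
# Organization names are from the announcement; everything else is assumed.
from typing import Optional

RESOURCES = {
    "US": "Life After Hate",
    "AU": "EXIT Australia",
    "ID": "ruangobrol.id",
}

# Placeholder list; the real flagged-term lists are not public.
FLAGGED_TERMS = {"example flagged term"}


def resource_for_search(country: str, query: str) -> Optional[str]:
    """Return the local support organization to surface, if the query is flagged."""
    if query.strip().lower() in FLAGGED_TERMS:
        return RESOURCES.get(country)
    return None
```

Keeping the country-to-organization table separate from the flagged-term check makes it straightforward to add new regions, which matches the expansion path the announcement describes.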
Expanding Our Team
All of this work has been led by a multi-disciplinary group of safety and counterterrorism experts developing policies, building product innovations and reviewing content with linguistic and regional expertise to help us define, identify and remove terrorist content from Facebook and Instagram. Previously, the team was solely focused on counterterrorism — identifying a wide range of organizations including white supremacists, separatists and Islamist extremist jihadists as terrorists. Now, the team leads our efforts against all people and organizations that proclaim or are engaged in violence leading to real-world harm. And the team now consists of 350 people with expertise ranging from law enforcement and national security, to counterterrorism intelligence and academic studies in radicalization.
This new structure was informed by a range of factors, but we were particularly driven by the rise in white supremacist violence and the fact that terrorists increasingly may not be clearly tied to specific terrorist organizations before an attack occurs, as was seen in Sri Lanka and New Zealand. This team of experts is now dedicated to taking the initial progress we made in combating content related to ISIS, al-Qaeda and their affiliates, and further building out techniques to identify and combat the full breadth of violence and extremism covered under our Dangerous Organizations policy.
Remaining Committed to Transparency
We are committed to being transparent about our efforts to combat hate, which is why when we share the fourth edition of the Community Standards Enforcement Report in November, our metrics on how we’re doing at enforcing our policies against terrorist organizations will include our efforts against all terrorist organizations for the first time. To date, the data we’ve provided about our efforts to combat terrorism has addressed our efforts against al-Qaeda, ISIS and their affiliates. These updated metrics will better reflect our comprehensive efforts to combat terrorism worldwide.
We know that bad actors will continue to attempt to skirt our detection with more sophisticated efforts and we are committed to advancing our work and sharing our progress.