In less than 24 hours on Saturday and Sunday, two mass shootings in El Paso, Texas, and Dayton, Ohio, left at least 29 people dead. The back-to-back massacres are raising questions about the role of social media platforms in spreading content that promotes violence and white supremacist ideologies.
Facebook, Twitter, and YouTube have been slow to take action against white supremacist users and posts on their platforms, but over the past year they have begun taking a series of actions and implementing policy changes that target this kind of content. So far, those efforts don't appear to be working.
The El Paso shooting in particular appears to have been inspired by white supremacist ideologies; federal authorities said on Sunday that they are investigating the crime as a potential hate crime and as a potential act of domestic terrorism.
Minutes before the shooting in El Paso, a hate-filled anti-immigrant manifesto that appears to be linked to the massacre appeared on popular messaging forum 8chan. Authorities are investigating whether it was written by the El Paso shooter. This is the third mass shooting linked to 8chan this year — both the Poway synagogue and Christchurch shooters posted to 8chan prior to their attacks.
8chan’s moderators deleted the manifesto that seems connected to the El Paso shooter almost immediately after it went up; according to investigative news publication Bellingcat, there were just three responses to an archived version of the post. But 8chan users have continued to repost and share links to the manifesto.
Police have identified the shooter in Dayton, Ohio, but they have not yet released information on his motivation for the shooting.
Here’s how Facebook, Twitter, YouTube, and 8chan have handled white supremacist content in the past, as well as what they had to say about their policies after this weekend’s mass shootings.
What they said this time
In a statement, a Facebook spokesperson referred Recode to its community standards, which they said “prohibit terrorist and hate groups of all kinds. This includes more than 200 white supremacist organizations globally.”
“As laid out in the policy rationale, we ban these organizations and individuals, and we also remove all praise, support and representation of them as soon as we become aware of it,” the spokesperson wrote. “Last fall, we started using proactive detection technology to identify content related to hate groups, including white supremacist organizations.”
A Twitter spokesperson said the platform is “proactively removing content that violates our policy and will be engaged with law enforcement, as appropriate.” The company also pointed to recent congressional testimony from Nick Pickles, its senior policy strategist, that referenced how under its violent extremist groups policies, it has taken action on 184 groups and permanently suspended more than 2,000 accounts.
YouTube didn’t immediately respond to a request for comment. We will update this story when it does.
Fredrick Brennan, who launched 8chan in 2013 but no longer controls the site, said in an interview with the New York Times on Sunday that it was time to shut the site down. “It’s not doing the world any good. It’s a complete negative to everybody except the users that are there,” he told the Times. “And you know what? It’s a negative to them, too. They just don’t realize it.”
Jim Watkins, an American army veteran who lives in the Philippines, now owns the site. Brennan told the Times he has tried to convince Watkins to shut down the website. 8chan did not immediately respond to a request for comment. We will update this story when it does.
What they’ve said in the past
In an op-ed in the Washington Post in March, Facebook CEO Mark Zuckerberg called for the government’s help in creating rules for the internet. “[We] have a responsibility to keep people safe on our services. That means deciding what counts as terrorist propaganda, hate speech and more. We continually review our policies with experts, but at our scale we’ll always make mistakes and decisions that people disagree with,” he wrote.
When Recode Editor at Large Kara Swisher interviewed Twitter CEO Jack Dorsey on Twitter this year, he gave himself a C grade for tech responsibility (Swisher gave him an F). While he wasn’t directly referring to white supremacist content on the platform, Dorsey said in reference to “healthy conversation” on Twitter: “We’ve made progress, but it has been scattered and not felt enough. Changing the experience hasn’t been meaningful enough. And we’ve put most of the burden on the victims of abuse (that’s a huge fail).”
At this year’s Code Conference, YouTube CEO Susan Wojcicki said, “We can definitely do, and continue to improve, how we manage the platform. I see how much improvement we’ve already made. For example, if you just look a few years ago — two years ago — there were a lot of articles, a lot of concerns about how we handle violent extremism. If you talk to people today who are experts in this field, you can see that we’ve made tremendous progress. … Last year, we committed to having over 10,000 people who are working on controversial content, so I see how much progress that we have already made.”
Their policies on extremist content
Facebook banned “praise, support and representation of white nationalism and separatism” in March after months of pressure from activists. In May, following the Christchurch mosque shooting, it restricted live-streaming so that users who violate certain policies are temporarily blocked from using Facebook Live after a single offense. That month it also banned extremist figures like Louis Farrakhan, Alex Jones, and Milo Yiannopoulos.
YouTube finally banned content from neo-Nazis, Holocaust deniers, and Sandy Hook trolls in June. It updated its hate speech policy to prohibit “videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.” It claims to have reduced views of supremacist videos by 80 percent since 2017.
In May, Twitter committed to “five individual actions” to address terrorist content on its platform, including updating its terms of use, creating more ways for users to flag content, investing in new technology to flag and remove terrorist content, “identifying appropriate checks on livestreaming,” and publishing transparency reports. It's worth noting that the platform has been slow to ban white nationalists; former Ku Klux Klan leader David Duke still has an active account on Twitter.
8chan has largely avoided any content moderation and has only one rule on its platform: “Do not post, request, or link to any content that is illegal in the United States of America and do not create boards with the sole purpose of posting or spreading such content.” That essentially amounts to a ban on copyright violations and child pornography. (Google delisted 8chan from its search results for violating its policies, but the site is not hard to find. It also has a verified Twitter account.)