
How YouTube erased history in its battle against white supremacy

The Washington Post, 13/06/2019, by Elizabeth Dwoskin
People pose with mobile devices in front of a YouTube logo. (Dado Ruvic/Reuters)

SAN FRANCISCO — Last week, YouTube launched a crackdown on white supremacists and purveyors of hoaxes. It took down thousands of videos and channels that featured Holocaust denial and promoted Nazi ideologies.

But instead of drawing praise, the rollout of the new hate-speech policy managed to offend a wide array of would-be supporters: Some of the advocates who had been lobbying YouTube to change its practices protested that their own video clips had been wrongly caught up in the sweep. Among the videos YouTube removed were clips of Hitler’s speeches and videos with historical and educational value explaining the origins and dangers of white-supremacist ideas.

YouTube wields enormous power as the gatekeeper of 5 billion hours of video watched daily. Its role is part social media service, part real-time broadcaster and part archive — meaning censorship on YouTube is more likely to raise difficult questions of erasing history.

Far more than Facebook or Twitter, YouTube’s vast video library has made it a first destination for countless students to research their term papers. Academics and journalists use the archival footage uploaded onto the site to analyze the past. 

Because the service has a “quasi-educational role,” said Adam Neufeld, a vice president at the Anti-Defamation League, it is all the more important that the company be vigilant about not pushing misinformation.

But unlike a traditional library, YouTube’s algorithms are designed to recommend related content and reward “watch time,” a formula that too often has led unwitting users down a rabbit hole of conspiracies and hateful ideas. Instead of solving that problem, the company’s new policies appeared to throw the baby out with the bathwater — to take down the good with the bad in one fell swoop.

The company has “a big problem with blanket or ham-handed applications of rules,” said Heidi Beirich, director of the Intelligence Project for the anti-hate advocacy group the Southern Poverty Law Center (SPLC). One of the group’s videos had been removed in the purge. By Thursday, YouTube had reinstated some of the videos, including the SPLC’s clip, and even put up its own warning labels on some educational content. 

But the company also emphasized that the burden is on users to provide context when uploading sensitive content, or risk having their videos taken down. YouTube, which until recently took an anything-goes approach to user-generated content, now argues that the public may not be able to readily discern the difference between the promotion of a hateful ideology and the act of teaching about it.

Educational content, like this YouTube clip about white supremacy by the advocacy group the Southern Poverty Law Center, was mistakenly removed by YouTube during a purge of white supremacist content. The company reinstated the video with a warning label after the group appealed. (Obtained by The Washington Post)

YouTube’s new policy prohibits videos in which a user asserts superiority over a vulnerable group, such as women, veterans, gay people, people of color and victims of a violent crime. The policy also bans videos alleging that a well-documented violent event, such as the Holocaust or the Sandy Hook school shooting, did not take place. Previously, YouTube banned only videos in which users directly called for violence against a protected group.

The company uses a combination of human monitoring and software in its takedown efforts, and says that every video that was taken down in the sweep was subject to a human review.

"We aren’t quite where we want to be,” said Sundar Pichai, the chief executive of YouTube parent Google, in a Sunday interview with Axios on HBO, describing YouTube’s efforts to remove hate speech. “YouTube is the scale of the entire Internet. But I think we are making a lot of progress.”

“It’s a hard computer science problem,” he added. “It’s also a hard societal problem because we need better frameworks around what is hate speech, what’s not, and how do we as a company make those decisions at scale and get it right without making mistakes.”

YouTube said that with a service so large — over 1.8 billion people log in each month — there were bound to be mistakes. The company said it was looking at ways to make content that has academic and research value available to researchers in the future. “Policy updates are always complicated, especially at the beginning as teams get up to speed,” said YouTube spokesman Farshad Shadloo. “Our policies apply to all creators equally.”

Educational videos that got swept up in YouTube’s takedown include clips of Hitler’s speeches uploaded by teachers who focus on World War II. A video channel run by California State University’s Center for the Study of Hate and Extremism also disappeared from the platform but was reinstated after inquiries from the Los Angeles Times. (YouTube confirmed that the video was reinstated.)

Another video that was removed came from the channel of the SPLC, which for years has lobbied Google to take a more aggressive stance against white supremacy. The video featured a journalist interviewing prominent British historian and Holocaust denier David Irving.

“The video was likely flagged as Holocaust denial propaganda, but what it is is an exploration of those views and why they are problematic,” said Beirich, who appealed the takedown. When the video was reinstated several days later, it had a warning label that said, “The following content has been identified by the YouTube community as inappropriate or offensive to some audiences.”

The video service is also following in the footsteps of Google, which has changed its search algorithm for certain terms and is now curating content it deems to be authoritative alongside search queries.

In 2015, for example, the SPLC complained to Google that top results of searches for the term “Martin Luther King” yielded hate sites and disguised white-supremacy sites. Such sites led to the radicalization of Dylann Roof, who was convicted of a hate-filled shooting of nine African Americans at a South Carolina church, and who described his Internet-inspired conversion to white supremacy in a manifesto.

Beirich said Google employees told her at the time that no changes could be made to the search terms, but shortly afterward the results began to change. Google declined to comment on the meeting.


Danielle Citron, a professor focusing on censorship and free expression at the University of Maryland Carey School of Law, said that educational efforts can be critical tools for countering hateful ideologies, particularly in an age of algorithms. Because advocacy videos use many of the same terms as hate videos, they have a good shot at ending up in the same feeds, potentially reaching the users most vulnerable to radicalization.

The argument for keeping up some kinds of demeaning or derogatory speech is that “if you take it down, you lose chances to combat hate as well — you lose opportunities to try to persuade,” she said.

But the ADL’s Neufeld said that such attempts to steer people away from radicalization often fall flat. Researchers have found that many people who seek out political views online and elsewhere are looking to confirm their existing preferences, and attempts to turn them away from those preferences can often make them dig in further because they resist being told that they are wrong.

Critics say YouTube has contributed to the radicalization behind several recent massacres, such as the shootings at two mosques in Christchurch, New Zealand. The video-broadcasting giant began discussing changes to its hate-speech policy roughly a year ago as part of a systematic effort to review policies on topics such as violent extremism and misinformation. But conversations about the dangers of white supremacy accelerated after the Christchurch shooting, a person familiar with the discussions said.

Beirich was surprised to learn that YouTube had been working on the new policy for a year: “If it was that long, why were there these basic errors?”
