Discord's problem: Which Nazi group were you reporting again?
A year after promising to take action against all forms of hate, the gaming-focused communication platform's follow-through is lacking
"Discord's mission is to bring people together around gaming. We're about positivity and inclusivity. Not hate. Not violence.
"Today we've shut down the altright.com server and a number of accounts associated with the events in Charlottesville. We will continue to take action against white supremacy, Nazi ideology, and all forms of hate."
That was Discord's public statement when it shut down an alt-right server last August, just two days after white supremacists descended on Charlottesville, Virginia, to hold a Unite the Right rally that climaxed with a car plowing into a crowd of counter-protesters, killing one woman.
So it was a little disappointing earlier this week when Slate reported on Discord as a safe space for white supremacists. The author of the article, April Glaser, spent an afternoon looking for hate groups through Discord search sites and found more than 20 communities centered on Nazism, white supremacy, and anti-Semitism. Glaser reported those servers to Discord and was told that it had investigated and taken action against users for violating its terms of service. However, she added that a number of the servers she named--including some with overtly racist names and one encouraging people to dox a list of anti-fascist activists--were still up.
We asked Discord about that lack of action, and a representative with the company referred us to its Community Guidelines, which prohibit doxing people, threatening to share their private information, or threatening to harm people. The representative also pointed to Discord's Terms of Service, which include an agreement not to use the service to "defame, libel, ridicule, mock, stalk, threaten, harass, intimidate or abuse anyone."
Discord's moderation team received the reports and took some action. However, the fact that some of the servers Glaser reported are still up doesn't necessarily mean Discord found them acceptable. A person familiar with the matter told us that the list of servers Glaser submitted did not include the server ID numbers for each one, which made finding the groups in question more of a challenge.
For example, "1488" was the name of one reported server. However, it would have been difficult to pin down which server was being reported, because Discord has over 1,000 servers named 1488. [UPDATE: After the publication of this article, Glaser told us that she spoke with Discord reps at length after reporting the servers, but they never asked her for server ID numbers. Furthermore, the above-referenced person familiar with the matter told us Discord has ways to track down reports without a server ID number.]
For those unaware, 1488 is white supremacist shorthand. The 14 refers to a slogan 14 words long--"We must secure the existence of our people and a future for white children"--while the 88 is an abbreviation of "Heil Hitler" derived from H being the eighth letter of the alphabet.
Given Discord's promise "to take action against white supremacy, Nazi ideology, and all forms of hate," why not just ban the more than 1,000 servers self-identifying as both white supremacists and Nazis in their choice of name?
Given Discord's promise last year "to take action against white supremacy, Nazi ideology, and all forms of hate," why not just ban the more than 1,000 servers self-identifying as both white supremacist and Nazi in their choice of name? And if Discord was simply unaware of the meaning of 1488, it's concerning that a company with more than 20 million daily active users and a commitment to keeping that community free from hate groups wouldn't have a moderation team better versed in such matters.
The person who told us about the logistical problem noted that these servers were largely small (no more than 200 people) and often seemed to be places for tasteless jokes and memes more than hubs for organized harassment or hate-group recruitment. These are common points people bring up when platforms are taken to task for allowing toxic users to run free on their services, but those defenses don't hold much water.
First, there's the "equal-opportunity offender/they don't really mean it" defense. As Southern Poverty Law Center senior research analyst Keegan Hankes told Slate, online communities that pride themselves on offensive humor (with racism and the like perceived as a facet but perhaps not the focus) are fertile recruitment grounds for hate groups.
"Once they can get somebody to laugh at the Holocaust, it's much easier to work backward and get them to think that white people are being oppressed systemically by Jews and people of color, is their argument," Hankes said.
As for the "Who cares when the groups are so small" defense, that too is concerning. If the point is that a relatively insignificant amount of the users are raving racists, then what does a service really lose by banning them? If nobody would even notice they're gone, then why aren't they gone yet?
The reluctance of these platforms (and plenty of non-gaming social networks as well) to take decisive action against such groups and users suggests a few possibilities, none of them particularly encouraging. Maybe the people running these platforms don't find these views objectionable. Maybe they don't think it's such a big deal to facilitate the growth of white supremacist organizations. Maybe they think fulfilling legal obligations is the same thing as fulfilling moral obligations.
Or perhaps most distressing of all, maybe Steam and Discord have done the math on how many users they would lose if they took a real stand against hate groups and decided that's just too big a hit to take.