Ellis Crosby takes a look at the failings of social media sites to tackle hate speech online.
The recent reaction from Twitter to the death threats aimed at their own employees from the Islamic State has highlighted once again the ineffectiveness of the website’s process for dealing with online threats. The moderation of online communities is a debate as old as social media itself, yet even ten years after Facebook was founded and eight years after Twitter’s formation it still seems very much a work in progress.
The best-known case surrounding this issue in recent months is the rape and death threats that were directed at feminist campaigner Caroline Criado-Perez after she won her battle to get a woman featured on an English banknote. Although two of the people who sent the threats have now been prosecuted, the response from Twitter was widely viewed as inadequate.
Very recently I experienced the flaws in Facebook’s process for dealing with reports first-hand. My grievance relates to how the site reviewed several occurrences of “hate speech” that I had reported. Here is what happened.
Occasionally a post from the Facebook page of Britain First (BF), the self-proclaimed “patriotic political party and street defence organisation”, makes it onto my timeline with a crass comment such as “can you believe these people?!”. When this happened last week with a post about the murder of Palmira Silva, I decided to look through the comments to see the extent to which BF supporters were jumping on the bandwagon of rumours that the alleged attacker was a Muslim convert. Not unexpectedly, the majority of the comments were either incredibly racist or Islamophobic, and after reporting a couple of the worst I had to stop reading.
Examples of some of the comments I reported:
A few hours later I received a notification to say that the comments had been reviewed. To my shock, both had been marked as not violating Facebook’s community standards and therefore would not be taken down. I tweeted about the situation and my peers reacted with the same shock at the failure of Facebook’s review system to remove such hateful posts.
One friend advised that I should look into the failings of the process further to see how bad it really was. So I conducted a little investigation to find out what other offensive and discriminatory comments Facebook deemed acceptable, reporting 37 comments as hate speech.
Before I go into the results of the reviews, I want to make clear what the Facebook Community Standards say about hate speech and how reports are dealt with. The standards state, “Facebook does not permit hate speech, but distinguishes between serious and humorous speech”. This is a necessary distinction, of course. There clearly needs to be freedom of speech across the community, or users would simply leave.
They go on to state, “While we encourage you to challenge ideas, institutions, events, and practices, we do not permit individuals or groups to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition.” Encouraging people to challenge things and have interesting discussions whilst prohibiting discriminatory attacks sounds good to me.
Comments and statuses reported as hate speech make their way to designated Hate and Harassment Teams in Facebook offices around the world. Each report is looked at by an employee, who decides whether it violates the Community Standards. If it does, the comment is removed and the user is warned or has their account suspended, depending on the severity of the violation and whether they have been reported before. If it does not, the user is still notified, but the comment is not deleted. In theory this sounds like it would be effective, but is it?
Of the 37 comments I reported only these two were found to be in violation of the Community Standards:
These comments were terrible, of course, and removing them was the right thing to do, but many more, if not all, of the comments I reported should have been found to violate the rules. Here are some of the worst comments that Facebook decided were not hate speech:
I can’t think of any reasons for these comments to make it through the reviews, certainly none that are acceptable anyway. As each comment is reviewed by an employee, comments like this shouldn’t just slip through the net. Perhaps Facebook’s guidelines for deciding whether a comment is “serious” or “humorous” are unclear to the employees carrying out the reviews, or perhaps the employees are too lenient.
It seems that these failures in the review process are not exceptions. Many anti-discrimination groups have discovered these flaws and resort to sharing offending comments between their members, so that instances of hate speech receive numerous reports rather than just one. However, an instance of hate speech should be removed regardless of how many people have reported it.
A remark I have heard before that perfectly sums up this situation is: “if Facebook is going to compare itself to a large country, it should invest in social services and protect its ‘citizens’ like one”.
By Ellis Crosby
[Image credit: Jason Howie]