According to Monika Bickert, Facebook’s head of global policy management, the social network receives a million user violation reports every day. All of these reports are reviewed by trained Facebook employees.
As you’d expect, deciding what to remove can be tricky, but some lines have been drawn. For example, in an interview with CNNMoney, she talked about prioritizing posts that appear to incite physical harm.
She said it is acceptable to take part in conversations that are critical of institutions and religions and to “engage in robust, political conversations”. What is prohibited is expressing hatred toward a person or a group based on a particular characteristic.
She also noted that the number of violation reports has grown steadily as flagging tools have rolled out to more and more platforms. The job is becoming more complex, too: an increasing share of users now comes from outside Facebook’s home market of the United States, making the process of weeding out hateful posts even more difficult and subjective. (That share stands at 80 percent as of now.)
“When it comes to hate speech,” says Bickert, “it’s so contextual. We think it’s really important for people to be making that decision.” Until the process becomes more automated, this is likely to remain an issue. In the near future, though, automation seems inevitable.
What the head of policy management didn’t say was what percentage of reported posts actually end up being removed from the network. The topic has been drawing attention lately, as evidenced by the first-ever Online Harassment Summit at the SXSW festival in Austin, Texas, where Bickert also spoke.