Facebook’s moderation algorithm has repeatedly been called biased by users whose accounts were suspended. Each time, the social media giant dismissed these claims outright.
It turns out the users may have been right all along. According to researchers at Facebook, under new rules launched in mid-2019, an account whose activity on Facebook or Instagram suggested the user was Black was 50 percent more likely to be automatically disabled by the moderation system than an account whose activity indicated otherwise. These researchers spoke to the media on condition of anonymity.
The researchers say they took their findings to their superiors, expecting the changes to be rolled back. Instead, they were hushed up and told not to conduct any further research into racial bias in Instagram’s automated account removal system.
Although the platform later deployed a slightly different moderation system, the researchers were not allowed to run any tests on it.
The employees state that Facebook management has repeatedly ignored and suppressed internal research showing racial bias in the way account owners are handled, a claim corroborated by eight other current and former employees. This pattern of neglect has led employees to believe that a small inner circle of senior executives, including Chief Executive Mark Zuckerberg, Chief Operating Officer Sheryl Sandberg, Nick Clegg, vice president of global affairs and communications, and Joel Kaplan, vice president of global public policy, is making decisions that run counter to what the company says in the media.