Every citizen has the right to freedom of speech; however, that right is limited by our responsibility for the safety of fellow citizens: we must not say or write things that hurt their sentiments or endanger their lives. Experts find undeniable evidence of a connection between online inflammatory speech and offline acts of violence against individuals or groups on the basis of race, ethnicity, or religious and political affiliation.
Pakistan is no stranger to online hate speech and expressions of bigotry on the basis of religious, cultural, or political differences. We have witnessed multiple online campaigns that dehumanized various ideological groups, thereby painting a target on their backs.
Consequently, well-meaning citizens have called for making online spaces safer for everyone, so that hate groups cannot hijack a useful commodity such as social media to peddle conspiracy theories, rumors, or hatred that undermines democracy and human rights. The responsibility of reclaiming and securing online platforms from hate groups and bigots falls on all of us: individuals, governments, and social media platforms.
Nearly two-thirds of the world's population is on Facebook. The largest social media platform, and the one used by the majority in Pakistan, has remained under scrutiny from the government and rights groups for potential hate-based activities. Such activities have largely been reduced by Facebook's proactive checks and stringent enforcement of its community standards, developed with input from independent experts in technology, public safety, and human rights, to curb online hate speech and propaganda on its platform.
Recently, Facebook organized a webinar with Pakistani journalists to apprise them of its measures to tackle hate and exclusion, and to invite their suggestions. The discussion that followed was insightful and reassuring: the largest social media platform is taking effective measures in this regard.
Facebook has a robust artificial intelligence-based system in place that pre-emptively checks for and removes hate speech on the basis of keywords in all working languages on the platform. The AI is assisted by thousands of content moderators, who provide a human check on nuanced and literary expressions that may otherwise appear problematic.
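The idea of a keyword-based first pass that flags posts for human review can be illustrated with a minimal sketch. The term list, thresholds, and function name here are hypothetical illustrations; Facebook's actual system relies on trained multilingual classifiers, not a static word list.

```python
# Illustrative sketch of a keyword-based first-pass filter.
# FLAGGED_TERMS and flag_for_review are hypothetical names;
# a production system would use machine-learned classifiers.

FLAGGED_TERMS = {"slur_a", "slur_b"}  # placeholder terms, not real slurs

def flag_for_review(post_text: str) -> bool:
    """Return True if the post should be queued for human moderation."""
    # Normalize: strip common punctuation and lowercase each word.
    words = {w.strip(".,!?").lower() for w in post_text.split()}
    # Flag if any normalized word matches the watch list.
    return bool(words & FLAGGED_TERMS)

print(flag_for_review("An ordinary post"))      # False
print(flag_for_review("contains slur_a here"))  # True
```

A matched post would then go to a human moderator, mirroring the AI-plus-human pipeline the article describes.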
Facebook has more than 35,000 people working on safety and security, including more than 15,000 content moderators. Its content policy team, part of the larger safety team, is based in 11 offices around the world and comprises subject-matter experts on diverse topics such as terrorism, hate speech, and child safety.
Understandably, there cannot be a single agreed-upon definition of hate speech; therefore, in Facebook's book, hate speech is a direct attack against people based on their protected characteristics.
This calls for elaborating on "direct attack" and "protected characteristics" to understand the inner workings of Facebook's content moderation. The social media giant considers violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust, or dismissal, cursing, and calls for exclusion or segregation to be direct attacks, which cannot be launched against protected characteristics such as race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and serious disease.
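The two-part policy rule above, that content counts as hate speech only when a direct attack targets a protected characteristic, can be sketched as a simple check. The category labels follow the article's wording, but the data model and function name are hypothetical, not Facebook's actual implementation.

```python
# Illustrative encoding of the policy rule: hate speech requires
# BOTH a direct attack AND a protected-characteristic target.
# Labels follow the article; the code itself is a hypothetical sketch.

DIRECT_ATTACKS = {
    "violent or dehumanizing speech", "harmful stereotypes",
    "statements of inferiority", "expressions of contempt",
    "cursing", "calls for exclusion or segregation",
}
PROTECTED_CHARACTERISTICS = {
    "race", "ethnicity", "national origin", "disability",
    "religious affiliation", "caste", "sexual orientation",
    "sex", "gender identity", "serious disease",
}

def is_hate_speech(attack_type: str, target_basis: str) -> bool:
    """The policy applies only when both conditions hold."""
    return (attack_type in DIRECT_ATTACKS
            and target_basis in PROTECTED_CHARACTERISTICS)

print(is_hate_speech("cursing", "religious affiliation"))  # True
print(is_hate_speech("cursing", "profession"))             # False
```

Note that an attack aimed at a basis outside the protected list (e.g. profession) falls outside this particular policy, which is exactly the distinction the definition draws.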
It is important to understand that Facebook is a global platform and has been working hard to give users stronger control over their own content and over the visibility of other content. To ensure the security, privacy, and safety of users, Facebook has introduced controls such as snooze, unfollow, comment control, and privacy check-up.
These controls allow users to decide what they see and to whom their content is visible, making Facebook easier and safer to use.
There are active measures we can take to curb hate speech, such as reporting and unsubscribing from accounts, groups, and channels that peddle conspiracy theories, slurs, and hateful content. Facebook also allows users to unfollow, snooze, or limit accounts that post controversial content. Most importantly, however, every one of us must commit to stopping the spread of hateful content. We must stop and think before clicking the share button, as our seemingly harmless act can have grave consequences for someone else.