Facebook founder Mark Zuckerberg has outlined a plan to let artificial intelligence (AI) software review content posted on the social network.
While describing the roadmap, Mark claimed that Facebook's algorithms would be able to spot bullying, violence, terrorism and even users with suicidal thoughts. He also admitted that some content had previously been removed from the social network by mistake.
He also said it would take years of hard work to develop algorithms capable of reviewing and approving content on Facebook.
In his letter discussing the future of Facebook, Mark said it was not possible to manually review the billions of posts and messages that appear on the website every day.
“The complexity of the issues we’ve seen has outstripped our existing processes for governing the community.” – Mark Zuckerberg
The social media platform was criticized in 2014, when reports revealed that one of the killers of Fusilier Lee Rigby had discussed murdering a soldier online months before the attack took place.
Citing other incidents, Mark pointed to the removal of videos related to the Black Lives Matter movement and of the historic ‘napalm girl’ photograph from Vietnam, saying these examples revealed “errors” in the existing content-review process.
He also said that Facebook is researching systems that can read text and look at photos and videos in order to flag potentially dangerous content.
“This is still very early in development, but we have started to have it look at some content, and it already generates about one third of all reports to the team that reviews content. Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda.”
Mark said his ultimate goal was to let Facebook users post whatever they liked, as long as the content was within the law. Algorithms would then automatically detect what had been uploaded and subject it to AI scrutiny. After this review process, users would be able to apply personal filters to remove the types of posts they did not want to see in their news feeds.
“Where is your line on nudity? On violence? On graphic content? On profanity? What you decide will be your personal settings. For those who don’t make a decision, the default will be whatever the majority of people in your region selected, like a referendum. It’s worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more. At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years.”
The plan was welcomed by the Family Online Safety Institute, a member of Facebook’s own safety advisory board.