Facebook Will Use AI to Moderate Comments

Facebook has come up with a new way to curb harmful behavior on the platform, such as bullying, scamming, spreading spam, and selling weapons and drugs. The social media giant is training AI-powered bots by simulating the actions of bad actors on a parallel version of Facebook, in the hope that the bots will one day prevent such behavior.

The simulator, called WW, runs on Facebook’s real code base.

Mark Harman, the Facebook engineer leading the research, told journalists that the simulation tool is quite flexible and can model a wide range of negative behavior. The engineers created a group of bad bots and set them loose on a group of innocent bots. They then ran the simulation repeatedly, trying different ways of stopping the bad bots and training the AI on how to deal with them.
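Facebook has not published WW’s internals, but the bad-bot-versus-target setup described above can be sketched as a toy multi-agent loop. Every class, parameter, and name below is an illustrative assumption, not Facebook’s actual code:

```python
import random

class BadBot:
    """Illustrative attacker that attempts a harmful action each step."""
    def act(self, target):
        # Hypothetical harmful action, e.g. sending a scam message.
        return {"action": "scam_message", "target": target}

class ModeratorPolicy:
    """Stand-in for the countermeasure being trained against the bad bots."""
    def __init__(self, block_rate):
        self.block_rate = block_rate  # assumed fraction of harmful actions caught

    def blocks(self, action):
        return random.random() < self.block_rate

def run_simulation(n_bad, n_targets, steps, policy, seed=0):
    """Run bad bots against innocent targets and count blocked vs. successful actions."""
    random.seed(seed)
    bad_bots = [BadBot() for _ in range(n_bad)]
    targets = list(range(n_targets))
    blocked = succeeded = 0
    for _ in range(steps):
        for bot in bad_bots:
            action = bot.act(random.choice(targets))
            if policy.blocks(action):
                blocked += 1
            else:
                succeeded += 1
    return blocked, succeeded

blocked, succeeded = run_simulation(n_bad=5, n_targets=20, steps=10,
                                    policy=ModeratorPolicy(block_rate=0.8))
print(blocked, succeeded)
```

In this sketch, researchers would compare different `ModeratorPolicy` variants by how many harmful actions each blocks; the real system presumably learns such a policy rather than using a fixed block rate.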

Since it is a web-based simulation running on a parallel version of Facebook, the bots’ actions and observations take place through real infrastructure, making them much more realistic.

He said:

At the moment, the main focus is training the bots to imitate things we know happen on the platform. But in theory and in practice, the bots can do things we haven’t seen before. That’s actually something we want, because we ultimately want to get ahead of the bad behavior rather than continually playing catch up.

WW is still in the research stage, and none of the simulations run so far have resulted in actual changes to the real Facebook. Harman thinks, though, that the work will lead to positive modifications to the platform by the end of the year.