In the wake of terrorist organizations using Twitter to communicate their plans and devise strategies, governments have pressured the social network to ban accounts that promote terrorism and violence.
Twitter has accordingly improved its detection algorithms to identify these accounts. Since the start of this year, the company has suspended about 300,000 accounts promoting terrorism and violence, 95 percent of which were weeded out by its new and improved automated tools.
Because it is nearly impossible to manually spot violent content among the tens of millions of messages exchanged daily, the task of identifying and eliminating such content on social media depends on AI and related technology.
The number of Twitter users has reached 328 million, with around 68 million monthly active users from the US alone.
Twitter has followed Facebook and YouTube in using automation tools (read: bots) that can quickly spot troublesome content. Facebook, for its part, has suspended 7,500 people for posting troublesome videos and posts.
The company revealed that around 75 percent of the accounts suspended this year were caught before a single tweet was sent, and that 935,897 accounts have been blocked since 2015.
“Our anti-spam tools are getting faster, more efficient and smarter in how we take down accounts that violate our policy,” Twitter said in a statement.
According to the social network, government data requests have continued to increase. The company provided authorities with data on about 3,900 accounts from January to June.
American authorities made 2,111 requests, and Twitter disclosed information for about 77 percent of those inquiries. Japan made 1,384 requests, followed by the UK with 606. Other governments issued only 38 requests.