Twitter’s Newest Feature Lets You “Rethink a Reply”

In order to curb negative comments and bullying on the platform, Twitter is testing an AI-powered feature dubbed ‘Rethink a Reply.’ The feature uses AI-based algorithms to flag tweets containing offensive language. Before posting, users are shown a prompt highlighting the questionable choice of words and asked to revise the content.
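The flow Twitter describes is simple in outline: check the draft reply, and if it appears harmful, prompt the author to revise it before it is published. The sketch below illustrates that flow; it is not Twitter's implementation, and the word list, threshold, and function names are hypothetical placeholders for the actual model trained on reported tweets.

HARMFUL_TERMS = {"idiot", "stupid", "trash"}  # hypothetical placeholder vocabulary, not Twitter's model

def looks_harmful(reply: str) -> bool:
    """Flag a reply if it contains any term from the placeholder vocabulary."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return bool(words & HARMFUL_TERMS)

def submit_reply(reply: str) -> str:
    """Prompt the user to revise a flagged reply before it is published."""
    if looks_harmful(reply):
        print("This reply may contain harmful language. Revise before posting?")
        choice = input("[r]evise / [p]ost anyway: ").strip().lower()
        if choice == "r":
            reply = input("Enter your revised reply: ")
    return reply  # the (possibly revised) reply is then published

if __name__ == "__main__":
    print("Published:", submit_reply("You are an idiot"))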

Explaining the feature, Twitter said:

When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.

To train the algorithm, Twitter is using tweets that have previously been reported to the company for abusive behavior. For now, the feature focuses on tweet replies and is limited to a small group of English-language users.

A similar feature was introduced by Instagram in December last year to curb bullying. The Facebook-owned social media platform likewise uses AI-powered algorithms to detect offensive content and gives users the option to revise their comments before posting.

Currently, Twitter is treating the feature as an experiment and has announced that it will evaluate the results before moving on to the next step.


