Twitter is doubling down on efforts to sanitise its platform. Under new CEO Parag Agrawal, the micro-blogging platform is piloting a feature that will let users add content warnings to specific photos and videos in individual tweets.
Currently, Twitter users can enable content warnings only as an account-wide setting, which marks media in all of their tweets as sensitive regardless of whether a particular tweet actually contains sensitive material.
“People use Twitter to discuss what’s happening in the world, which sometimes means sharing unsettling or sensitive content. We’re testing an option for some of you to add one-time warnings to photos and videos you Tweet out, to help those who might want the warning,” the company tweeted late on Wednesday.
Once a tweet with a warning is posted, the image or video appears blurred, accompanied by a content notice explaining why it was flagged.
Agrawal has already said that his top priority in the new role is to improve the company’s execution and streamline how the micro-blogging platform operates.
Twitter is also overhauling the way it handles problematic and abusive tweets reported by its users, aiming to bring a more ‘human-first’ approach to reports of misinformation, hate speech, spam and other violations.
The new approach, which is currently being tested with a small group in the US, will be rolled out globally next year.
“It lifts the burden from the individual to be the one who has to interpret the violation at hand. Instead it asks them what happened,” Twitter said in a statement.
This method is called ‘symptoms-first’: Twitter first asks the person what is going on. By refocusing on the experience of the person reporting the tweet, Twitter hopes to improve the quality of the reports it receives.