Facebook will no longer allow graphic images of self-harm on its platform as it tightens its policies on suicide content amid growing criticism of how social media companies moderate violent and potentially dangerous content.
The social network also said on Tuesday that self-injury-related content will become harder to search for on Instagram and will no longer appear as recommended content in the photo-sharing app's Explore section.
Facebook's statement comes on World Suicide Prevention Day and follows Twitter's announcement that content related to self-harm will no longer be reported as abusive, an effort to reduce the stigma around suicide.
About 800,000 people die by suicide every year, or one person every 40 seconds, according to a report by the World Health Organization.
Facebook has a team of moderators who watch for content such as live broadcasts of violent acts and suicides. The company works with at least five outsourcing vendors in at least eight countries on content review, a Reuters tally showed in February.
Governments around the world are wrestling with how to better control content on social media platforms, which are often blamed for encouraging abuse, spreading online pornography, and influencing or manipulating voters.
Last month, Amazon.com told Reuters that it plans to show helpline phone numbers to customers who query its site about suicide, after search suggestions on the site pointed users toward nooses and other potentially harmful products.
Alphabet's Google, Facebook, and Twitter already display helpline numbers in response to user queries involving the term "suicide."
© Thomson Reuters 2019