Facebook Says It Will No Longer Show Health Groups in Recommendations

Over the last year, Facebook took down more than 1 million groups that violated its policies on misinformation and harmful content.

Highlights
  • Facebook is under pressure to curb health misinformation on its platform
  • Twitter laid out how it assesses groups for coordinated harmful activity
  • Twitter said such coordination could be technical or social

Facebook will no longer show health groups in its recommendations, the social media giant announced on Thursday, saying it was crucial that people get health information from "authoritative sources."

Over the last year, the company took down more than 1 million groups that violated its policies on misinformation and harmful content, it said in a blog post.

Misleading health content has racked up an estimated 3.8 billion views on Facebook over the past year, peaking during the coronavirus pandemic, advocacy group Avaaz said in a report last month.

Facebook, under pressure to curb such misinformation on its platform, has made amplifying credible health information a key element of its response. It also removes certain false claims about COVID-19 that it determines could cause imminent harm.

The world's largest social network also said it would bar administrators and moderators of groups that have been taken down for policy violations from creating any new groups for a period of time.

Facebook said in the blog post that it also now limits the spread of groups tied to violence by removing them from its recommendations and searches, and soon, by reducing their content in its news feed. Last month, it removed nearly 800 QAnon conspiracy groups for posts celebrating violence, showing intent to use weapons, or attracting followers with patterns of violent behavior.

Twitter also said in a tweet on Thursday that the platform had reduced impressions on QAnon-related tweets by more than 50 percent through its "work to deamplify content and accounts" associated with the conspiracy theory. In July, the social media company said it would stop recommending QAnon content and accounts in a crackdown it expected would affect about 150,000 accounts.

In a blog post on Thursday, Twitter laid out how it assesses groups and content for coordinated harmful activity, saying it must find evidence that individuals associated with a group or campaign are engaged in some kind of coordination that may harm others.

The company said this coordination could be technical, for example, an individual operating multiple accounts to tweet the same message, or social, such as using a messaging app to organise many people to tweet at the same time.

Twitter said it prohibits all forms of technical coordination, but for social coordination to break its rules, there must be evidence of physical or psychological harm, or "informational" harm caused by false or misleading content.

© Thomson Reuters 2020

