Twitter on Tuesday announced a new feature to allow users to flag content that could contain misinformation, a scourge that has only grown during the pandemic.
"We're testing a feature for you to report Tweets that seem misleading - as you see them," the social network said from its safety and security account.
Starting Tuesday, some users in the United States, South Korea, and Australia will see an option to choose "It's misleading" after clicking "Report Tweet."
Users can then be more specific, categorizing the potentially misleading tweet as relating to "health," "politics," or "other."
"We're assessing if this is an effective approach so we're starting small," the San Francisco-based company said.
"We may not take action on and cannot respond to each report in the experiment, but your input will help us identify trends so that we can improve the speed and scale of our broader misinformation work."
Twitter, like Facebook and YouTube, regularly comes under fire from critics who say it does not do enough to fight the spread of misinformation.
But the platform does not have the resources of its Silicon Valley neighbors, and so often relies on experimental techniques that are less expensive than recruiting armies of moderators.
Such efforts have ramped up as Twitter toughened its misinformation rules during the COVID-19 pandemic and during the US presidential election between Donald Trump and Joe Biden.
For example, in March Twitter began blocking users who had been warned five times about spreading false information about vaccines.
And during the 2020 re-election campaign, the network began flagging tweets from Trump with banners warning of their misleading content. The then-president was ultimately banned from the platform for posting incitements to violence and messages discrediting the election results.
Moderators are ultimately responsible for determining which content actually violates Twitter's terms of use, but the network has said it hopes to eventually use a system that relies on both human and automated analysis to detect suspicious posts.
Concern around COVID-19 vaccine misinformation has become so rampant that in July Biden said Facebook and other platforms were responsible for "killing" people by allowing false information about the shots to spread.
He later walked back the remarks to clarify that the false information itself is what could harm or even kill those who believe it.