Facebook has struggled for years to figure out what is and isn't hate speech on its platform. On Tuesday, a "bug" revealed that Facebook might be thinking about crowdsourcing the question.
At the bottom of each post on my News Feed on Tuesday morning was the same question: "Does this post contain hate speech?" It appeared on everything from news articles to personal updates to a picture of my cat.
The question appeared - and disappeared - on Tuesday morning, as Facebook users reacted with amusement and alarm to the variety of posts it was attached to.
"This was an internal test we were working on to understand different types of speech, including speech we thought would not be hate. A bug caused it to launch publicly. It's been disabled," a Facebook spokesperson said on Tuesday.
As for why Facebook might be testing a feature like this, a little bit of context is needed.
Just last week, Facebook finally released the guidelines it uses internally to enforce its own rules on the platform. That's important, in part, because Facebook has long struggled to moderate content consistently or to account for the context of a post.
For instance: Facebook once removed Nick Ut's iconic Vietnam War photograph, claiming it violated the site's policy on nudity. The platform defended, and then reversed, the decision after the photograph's removal outraged basically the entire country of Norway. And minority groups have long said that they believe Facebook's rules and enforcement unfairly punish those who try to call out hate speech and racism.
Here is how Facebook currently defines hate speech, in case you were curious:
"A direct attack on people based on what we call protected characteristics - race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation."
According to the newly published guidelines, moderators should allow content that calls out hateful speech that would otherwise violate the platform's prohibition against it, "but we expect people to clearly indicate their intent" when doing so. "Where the intention is unclear, we may remove the content."
© The Washington Post 2018