Researchers have developed a computer algorithm capable of identifying
antisocial behaviour in website comment sections.
On the Internet, people who behave antisocially in the comment sections of web content have come to be known as trolls.
Researchers
from Cornell University and Stanford University built their troll-finding
algorithm by analysing typical troll behaviour in data provided by
CNN.com, Breitbart.com and IGN.com.
The researchers
compared the behaviour of users who had been banned with that of users
who had never been banned over an 18-month period.
They studied the comments of more than 10,000 Future Banned Users (FBUs), Tech Xplore reported.
The researchers
found that such users tend to concentrate their efforts in a small
number of threads, are more likely to post off-topic comments, and are
more successful at drawing responses from other users.
Studying the
evolution of these users from the moment they join a community up to
the point they are banned, the researchers found that not only does the
quality of their writing decline over time relative to other users, but
the community also tolerates them less and less.
The researchers reported that FBUs were
relatively easy to spot, and that the behavioural patterns they had
identified could be converted into features a computer could use.
The researchers said that they were able to spot FBUs with an 80 per cent accuracy rate after just ten posts.
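The description above amounts to a standard supervised-learning setup: summarise each user's first ten posts as a handful of numeric features and train a classifier to predict whether that user will eventually be banned. The sketch below is a minimal illustration of that idea in Python; the feature names, the synthetic data and the choice of logistic regression are assumptions for demonstration only, not the researchers' actual features, dataset or model.

```python
# Illustrative sketch only: hypothetical features and synthetic data,
# not the researchers' actual implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical per-user features computed from a user's first ten posts:
#   column 0: fraction of posts deleted by moderators
#   column 1: number of distinct threads posted in (normalised)
#   column 2: average topical relevance of each post to its thread
#   column 3: average number of replies each post attracted (normalised)
n_users = 1000
X = rng.random((n_users, 4))

# Hypothetical labels: 1 = eventually banned (FBU), 0 = never banned.
# A toy rule stands in for real moderation records here.
y = (X[:, 0] > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train a simple classifier and report held-out accuracy.
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In a real setting the labels would come from moderation records rather than a toy rule, and the reported 80 per cent figure would refer to accuracy on users the model had never seen.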