We've all heard anecdotes about trolling on Wikipedia and other social platforms, but rarely has anyone been able to quantify the levels and origins of online abuse. That's about to change. Researchers with Alphabet tech incubator Jigsaw worked with the Wikimedia Foundation to analyze 100,000 comments left on English-language Wikipedia. They found predictable patterns in who launches personal attacks, and when.
The research team's goal was to lay the groundwork for an automated system to "reduce toxic discussions" on Wikipedia, work that could one day lead to a warning system for moderators. The researchers caution that such a system would require further study to implement, but they have released a paper with some fascinating early findings.
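To make the idea of such a warning system concrete, here is a minimal sketch, not the researchers' actual pipeline, of how a text classifier trained on human-labeled comments could flag likely personal attacks for moderator review. The example comments, labels, `flag_for_review` helper, and threshold are all hypothetical placeholders; a real system would be trained on a large annotated corpus.

```python
# Hypothetical sketch of a moderator-warning classifier.
# Training data, labels, and the 0.8 threshold are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Thanks for fixing the citation!",
    "You are an idiot, stop editing this page.",
]
is_attack = [0, 1]  # stand-in for human annotations

# Character n-grams are robust to misspellings and obfuscated slurs.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(),
)
model.fit(comments, is_attack)

def flag_for_review(comment: str, threshold: float = 0.8) -> bool:
    """Return True if the comment should be surfaced to a moderator."""
    return model.predict_proba([comment])[0, 1] >= threshold
```

A production system would tune the threshold carefully: set it too low and moderators drown in false alarms, too high and genuine attacks slip through.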