Researchers have already tested Google’s algorithms for political bias

Google logo seen during Google Developer Days (GDD) in Shanghai, China, September 2019. (credit: Lyu Liang | VCG | Getty Images)

In August 2018, President Donald Trump claimed that social media was "totally discriminating against Republican/Conservative voices." The claim wasn't new: for years, conservatives have accused tech companies of political bias. Just last July, Senator Ted Cruz (R-Texas) asked the FTC to investigate the content moderation policies of tech companies like Google. A day after Google's vice president insisted that YouTube was apolitical, Cruz claimed that political bias on YouTube was "massive."

But the data doesn't back Cruz up, and it has been available for a while. While the actual policies and procedures for moderating content are often opaque, it is possible to look at the outcomes of moderation and ask whether those outcomes show any signs of bias. And, last year, computer scientists decided to do exactly that.

Moderation

Motivated by the long-running argument in Washington, DC, computer scientists at Northeastern University decided to investigate political bias in YouTube's comment moderation. The team analyzed 84,068 comments on 258 YouTube videos. At first glance, comments on right-leaning videos seemed more heavily moderated than those on left-leaning ones. But once the researchers also accounted for factors such as the prevalence of hate speech and misinformation, the apparent difference between moderation on right- and left-leaning videos disappeared.
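That kind of controlled comparison is a standard confounder adjustment. As a minimal sketch of the idea (not the researchers' actual code, and using hypothetical column names for a per-comment dataset), one could compare raw moderation rates by a video's political leaning, then fit a logistic regression that adjusts for hate-speech and misinformation scores:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per comment. Column names are
# assumptions for illustration, not the study's actual schema:
#   moderated      1 if the comment was removed, 0 otherwise
#   right_leaning  1 if the host video is right-leaning, 0 if left-leaning
#   hate_speech    classifier score for hate speech in the comment
#   misinfo        classifier score for misinformation in the comment
comments = pd.read_csv("comments.csv")

# Naive comparison: raw moderation rates by video leaning.
# This is the "first glance" number, before any controls.
print(comments.groupby("right_leaning")["moderated"].mean())

# Controlled comparison: logistic regression of moderation status
# on leaning, adjusting for hate-speech and misinformation scores.
model = smf.logit(
    "moderated ~ right_leaning + hate_speech + misinfo",
    data=comments,
).fit()
print(model.summary())
```

If the coefficient on `right_leaning` shrinks toward zero once the covariates enter the model, the raw left/right gap in moderation rates is explained by the content of the comments rather than by politically biased enforcement, which is the pattern the Northeastern team reported.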

