Rants, Raves, and Rhetoric v4

Crowdsourcing Bias Identification

Zittrain’s set of tweets was interesting reading.

Lots more on it: best, 1, 2, 3.

There is an interesting plug-in called Media Bias/Fact Check which helps show how a news site skews. Before you rush off to use it, you should, as I did, review their methodology to ensure you can reasonably trust it.

My one issue with it is the crowdsourcing component: each site they review has a PollDaddy widget. PollDaddy, like any similar tool, tries to prevent multiple voting, but that protection relies on cookies. So if I wanted to skew a poll and, say, make Snopes appear extreme right, it is not all that difficult to vote, delete the cookie, vote again, delete again, and so on. A possible example: MBFC marks Palmer Report as “left center” while the poll shows Extreme Left=76, Left=36, Left Center=44, Least Biased=26, Extreme Right=2. There appears to be some disagreement between those polled and the reviewers. The polls are not pulled into the plug-in database; instead, humans use them to decide whether to revisit a prior determination. So MBFC cannot be directly manipulated through the polls.
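To make the weakness concrete, here is a minimal sketch (my own illustration, not PollDaddy's actual implementation) of a poll that relies only on a client-held cookie to block repeat votes. Because the "already voted" state lives entirely on the client, clearing the cookie resets it; the class and option names are hypothetical.

```python
# Sketch of cookie-only duplicate-vote prevention and why it fails:
# the server only refuses votes that arrive with a cookie it has seen before.

import uuid
from collections import Counter


class CookieOnlyPoll:
    """Hypothetical poll that trusts a client-held cookie to block repeats."""

    def __init__(self, options):
        self.tally = Counter({option: 0 for option in options})
        self.seen_tokens = set()  # cookies the server has handed out and seen back

    def vote(self, choice, cookie=None):
        """Returns (accepted, cookie). A missing cookie looks like a brand-new voter."""
        if cookie in self.seen_tokens:
            return False, cookie          # repeat vote with the same cookie: rejected
        token = cookie or str(uuid.uuid4())
        self.seen_tokens.add(token)
        self.tally[choice] += 1
        return True, token


poll = CookieOnlyPoll(["Extreme Left", "Left", "Left Center",
                       "Least Biased", "Right", "Extreme Right"])

# Honest client keeps its cookie, so the second vote is rejected.
ok, cookie = poll.vote("Left Center")
ok_again, _ = poll.vote("Left Center", cookie=cookie)
print(ok, ok_again)                 # True False

# A client that "deletes" its cookie between votes gets every vote counted.
for _ in range(50):
    poll.vote("Extreme Right", cookie=None)
print(poll.tally["Extreme Right"])  # 50
```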

Back around 2003, spam was really, really bad. Work had not yet deployed an anti-spam solution, so I turned to an interesting client plug-in that let users mark messages as spam. If enough users marked a sender's messages as spam, the sender was blacklisted and anything from them went to a junk folder. The dark side of this model? Some people interpreted “spam” as any email they did not want to receive. Several email lists and legitimate advertisers with easy, functional unsubscribe tools were blacklisted. It was easier for people to hit spam than to do the right thing.
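Here is a rough sketch of that crowd-blacklist model as I remember it (my own reconstruction, not the plug-in's actual code; the threshold value and names are made up). Once enough users flag a sender, everything from that sender goes to junk for everyone, which is exactly how a legitimate mailing list gets caught.

```python
# Crowd-sourced spam blacklist: N independent user flags blacklist a sender.

from collections import defaultdict

FLAG_THRESHOLD = 5  # hypothetical cutoff; the real plug-in's value is unknown


class CrowdSpamFilter:
    def __init__(self, threshold=FLAG_THRESHOLD):
        self.threshold = threshold
        self.flags = defaultdict(set)   # sender -> set of users who flagged it
        self.blacklist = set()

    def mark_spam(self, user, sender):
        self.flags[sender].add(user)
        if len(self.flags[sender]) >= self.threshold:
            self.blacklist.add(sender)

    def route(self, sender):
        return "junk" if sender in self.blacklist else "inbox"


mail_filter = CrowdSpamFilter()

# Five users flag a legitimate mailing list because clicking "spam" is easier
# than unsubscribing -- and now it lands in junk for everyone.
for user in ["amy", "bob", "cat", "dan", "eve"]:
    mail_filter.mark_spam(user, "newsletter@legit-list.example")

print(mail_filter.route("newsletter@legit-list.example"))  # junk
print(mail_filter.route("colleague@example.com"))          # inbox
```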

The wisdom of crowds has very narrow applications.

But a look at recent cases and new research suggests that open-innovation models succeed only when carefully designed for a particular task and when the incentives are tailored to attract the most effective collaborators.

So I appreciate that MBFC keeps a firewall between the crowdsourcing and their reviews. But that also means their method is very, very labor-intensive. Sites will be very slow to be added.

Facebook is talking about removing fake news. Some are calling for them to do something like MBFC and help users understand what they are reading. Back in May they removed their editors, who were in charge of doing essentially the MBFC thing: reviewing what ended up in Trending, ensuring it was good material, and writing summaries. These editors were accused of bias. When Facebook replaced them, fake news immediately started showing up in Trending. Removing fake news could help, but how they go about it could be interesting. Their editor debacle could push them down the algorithm route, but their algorithm debacle could push them back toward editors. Maybe some mix of the two?

