Facebook has started to assign a reputation score to its users, predicting their trustworthiness on a scale from zero to one, according to The Washington Post.
Facebook has been developing the rating system over the past year to measure the credibility of users and help identify malicious actors, part of the company’s efforts against fake news, according to product manager Tessa Lyons.
Because the social network relied on users to report problematic content, it discovered that some users had begun falsely reporting posts as fake, which meant Facebook needed new tools to fight the practice.
It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Lyons said.
Lyons said the trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility, and it isn’t the only way Facebook measures risky behaviour. It is one of thousands of new behavioural clues the network takes into account as it tries to understand risk. It is unclear what other criteria Facebook uses to determine a user’s score, whether all users have a score, or how these scores are used.
“One of the signals we use is how people interact with articles,” Lyons later explained. “For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.”
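The weighting Lyons describes can be illustrated with a minimal sketch: a user whose past false-news reports were confirmed by fact-checkers gets more weight on future reports than one who flags articles indiscriminately. Facebook has not published its actual method, so the function name, the precision formula, and the smoothing constants below are all illustrative assumptions, not the company’s system.

```python
# Hypothetical sketch of the feedback-weighting idea, not Facebook's
# actual system: weight a user's report by how often their past
# reports agreed with fact-checkers.

def feedback_weight(confirmed_false: int, total_reports: int) -> float:
    """Return a weight in (0, 1) for a user's next false-news report.

    confirmed_false: past reports a fact-checker confirmed as false
    total_reports:   all past false-news reports the user filed
    """
    if total_reports == 0:
        return 0.5  # no history: neutral weight (assumed default)
    # Laplace smoothing so a single lucky report doesn't dominate
    return (confirmed_false + 1) / (total_reports + 2)

# A user whose reports were mostly confirmed carries more weight...
careful = feedback_weight(confirmed_false=9, total_reports=10)
# ...than one who flagged many articles, most of which turned out true.
indiscriminate = feedback_weight(confirmed_false=9, total_reports=100)

assert careful > indiscriminate
```

Under this sketch, the careful reporter’s weight is about 0.83 while the indiscriminate reporter’s is about 0.10, matching the behaviour Lyons describes: accurate past feedback earns more influence over future false-news signals.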