Facebook has published its first quarterly Community Standards Enforcement Report, in which it reveals that it took moderation action against almost 1.5 billion accounts and posts which violated community standards in the first three months of 2018, the Guardian reports.
In the report, Facebook says the majority of moderation action was against spam posts and fake accounts. It deleted 837 million spam posts and shut down 583 million fake accounts. It also moderated 2.5 million pieces of hate speech, 1.9 million instances of terrorist propaganda, 3.4 million pieces of graphic violence and 21 million pieces of content featuring adult nudity and sexual activity.
Alex Schultz, Facebook’s VP of data analytics, said the amount of content moderated for graphic violence almost tripled quarter-to-quarter. He thinks this is due to the violence that occurred in Syria over the quarter: “Often when there’s real bad stuff in the world, lots of that stuff makes it on to Facebook.”
The company managed to increase the amount of content taken down by using new AI-based tools that don’t require individual users to flag content as suspicious. AI tools were most useful against violations like fake accounts and spam: they found 98.5 percent of the fake accounts that were shut down and “nearly 100 percent” of the spam.