Facebook Took Measures Against Malicious Content 22.5 Million Times In Q2 2020

In the second quarter of this year, Facebook took action against malicious or potentially dangerous content in user accounts 22.5 million times, up from 9.6 million cases in the first quarter, according to a report the company posted on its website on Tuesday.

The report indicates that the rate at which posts containing malicious content were detected proactively rose six percentage points over the same period, from 89% to 95%. On Instagram, the rate rose 39 percentage points, from 45% to 84%. The company attributes the improvement to "expanding automated technology" in the sections of its platforms where users communicate in English, Spanish, Arabic, and Indonesian.

According to the company, deleted or temporarily blocked content was restored when additional review concluded that the account owners had not violated its rules. "We want people to know that the data we provide about malicious content is accurate, so we will conduct an independent audit of our decisions with the involvement of a third party, starting in 2021," Facebook promised.

In early March, Facebook CEO Mark Zuckerberg said that the social network removes false information about the spread of the coronavirus and "conspiracy theories," and blocks ads from companies that try to exploit the situation.

The report states: "Our proactive detection rate for hate speech on Facebook increased 6 points from 89% to 95%. In turn, the amount of content we took action on increased from 9.6 million in Q1 to 22.5 million in Q2. On Instagram, our proactive detection rate for hate speech increased 39 points from 45% to 84%, and the amount of content we took action on increased from 808,900 in Q1 2020 to 3.3 million in Q2."
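For context, the proactive detection rate, as platforms typically define it, is the share of actioned content that automated systems flagged before any user reported it; the formula and the worked figure below are our own illustration, not taken from the report:

$$\text{proactive rate} = \frac{\text{content flagged before user reports}}{\text{total content actioned}}, \qquad 0.84 \times 3.3\,\text{million} \approx 2.8\,\text{million}$$

By that reading, roughly 2.8 million of the 3.3 million pieces of content actioned on Instagram in Q2 would have been caught before anyone reported them.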
