Facebook will shut down real accounts involved in trolling, just as it does fake accounts: Report

Facebook is taking a more aggressive approach to shutting down coordinated groups of real-user accounts that engage in certain harmful activities on its platform, using the same strategy its security teams take against campaigns that use fake accounts, the company told Reuters. The new approach, reported here for the first time, uses the tactics usually taken by Facebook’s security teams for wholesale shutdowns of networks engaged in influence operations that use false accounts to manipulate public debate, such as Russian troll farms.

This could have major implications for how the social media giant handles political and other coordinated movements that break its rules, at a time when Facebook’s approach to abuses on its platform is under heavy scrutiny from global lawmakers and civil society groups. Facebook said it now plans to take the same network-level approach with clusters of coordinated real accounts that systematically break its rules through mass reporting, where many users falsely report a target’s content or account to get it shut down, or brigading, a type of online harassment where users may coordinate to target an individual through mass posts or comments.

In a related change, Facebook said on Thursday it would take the same approach to campaigns of real users that cause “coordinated social harm” on its platforms, as it announced the removal of the German anti-COVID-restrictions Querdenken movement. These expansions, which a spokesperson said are in their early stages, mean Facebook’s security teams can identify the core movements driving such behavior and take more sweeping action than removing individual posts or accounts, as the company might otherwise do.

In April, BuzzFeed News published a leaked internal Facebook report about the company’s role in the January 6 riot at the US Capitol and its challenges in curbing the fast-growing ‘Stop the Steal’ movement, where one of the findings was that Facebook had little “policy around coordinated authentic harm.”

Facebook’s security experts, who are separate from the company’s content moderators and handle threats from adversaries trying to evade its rules, began cracking down on influence operations using fake accounts in 2017, following the 2016 US election in which US intelligence officials concluded that Russia had used social media platforms as part of a cyber-influence campaign, a claim Moscow has denied.

Facebook dubbed this banned activity by groups of fake accounts “coordinated inauthentic behavior” (CIB), and its security teams began announcing sweeping takedowns in monthly reports. The security teams also handle certain threats that may not use fake accounts, such as fraud or cyber-espionage networks, or overt influence operations such as some state media campaigns.

Sources said the company’s teams had long debated how it should intervene at the network level against large movements of real user accounts that systematically break its rules.

In July, Reuters reported on the Vietnam Army’s online information warfare unit, whose members engaged in actions including mass reporting of accounts to Facebook but often used their real names. Facebook removed some accounts over these mass reporting attempts.

Facebook is under increasing pressure from global regulators, lawmakers and employees to combat widespread abuse of its services. Others have criticized the company over allegations of censorship, anti-conservative bias or inconsistent enforcement.

The expansion of Facebook’s network disruption model to authentic accounts raises further questions about how the changes could affect public debate, online movements, and campaign tactics across the political spectrum.

“At times, problematic behavior will appear very close to social movements,” said Evelyn Douek, a Harvard law lecturer who studies platform governance. “It’s going to hinge on this definition of harm … but obviously people’s definitions of harm can be quite subjective and vague.”

High-profile examples of coordinated activity around last year’s US election, from teens and K-pop fans who claimed they used TikTok to sabotage a rally for former president Donald Trump in Tulsa, Oklahoma, to political campaigns that pay online meme-makers, have also sparked debate over how platforms should define and approach coordinated campaigns.
