Facebook selective in curbing hate speech, anti-Muslim content in India: Report

Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, especially anti-Muslim content, according to leaked documents obtained by the Associated Press, even as the internet giant’s own employees cast doubt on the company’s motivations and interests.

Internal company documents on India, based on research spanning from as recently as a company memo of March this year back to 2019, highlight Facebook’s continuing struggle to quash abusive content on its platform in the world’s largest democracy and the company’s largest growth market. India has a history of communal and religious tensions simmering on social media and inciting violence. The files show that Facebook has been aware of the problems for years, raising questions about whether it has done enough to address them. Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party are involved.

Across the world, Facebook has become increasingly important in politics, and India is no different.

Modi has been credited with leveraging the platform to his party’s advantage during elections, and reporting in The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the BJP. Modi and Facebook’s chairman and CEO Mark Zuckerberg have displayed a friendly rapport, memorialized by a 2015 image of the two hugging at Facebook headquarters. The leaked documents include a slew of internal company reports on hate speech and misinformation in India, which in some cases appear to have been amplified by its own “recommended” feature and algorithms. They also include the company’s employees’ concerns over the mishandling of these issues and their discontent over “malcontent” going viral on the platform.

According to the documents, Facebook saw India as one of the most “at-risk countries” in the world and identified both Hindi and Bengali as priorities for “automation on violating hostile speech”. Still, Facebook didn’t have enough local-language moderators or content flagging in place to stop misinformation, which at times led to real-world violence.

In a statement to the AP, Facebook said it had made “significant investments in technology to find hate speech in a variety of languages, including Hindi and Bengali” which “reduced the amount of hate speech by half” in 2021.

“Hate speech against marginalized groups, including Muslims, is on the rise globally. That’s why we’re improving enforcement and committed to updating our policies as hate speech evolves online.” This AP story, among others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by the legal counsel of former Facebook employee-turned-whistleblower Frances Haugen. The redacted versions were obtained by a consortium of news organizations, including the AP.

Back in February 2019, ahead of a general election and with concerns about misinformation running high, a Facebook employee wanted to understand what a new user in India would see on their News Feed if all they did was follow pages and groups recommended by the platform itself.

The employee created a test user account and kept it live for three weeks, a period during which an extraordinary event shook India – a terrorist attack in disputed Kashmir killed more than 40 Indian soldiers, bringing the country close to war with rival Pakistan.

In the note, titled “An Indian test user’s descent into a sea of polarizing, nationalist messages”, the employee, whose name has been redacted, said they were “shocked” by the content flooding the News Feed. The person described the material as “a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.” Seemingly benign and innocuous groups recommended by Facebook quickly turned into something else entirely, where hate speech, unverified rumors and viral content ran rampant.

The recommended groups were full of fake news, anti-Pakistan rhetoric and Islamophobic content. Most of the material was extremely graphic.

One showed a man holding the bloodied head of another man covered in a Pakistani flag, with an Indian flag partially covering it. Facebook’s “Popular Across Facebook” feature surfaced a slew of unverified material relating to retaliatory Indian strikes in Pakistan after the bombing, including an image of a napalm bomb taken from a video game clip that had been debunked by one of Facebook’s fact-checking partners.

“Following this test user’s news feed, I have seen more images of dead people in the past three weeks than I have seen in my entire life,” the researcher wrote.

The report raised deep concerns about what such divisive material could lead to in the real world, where local news outlets at the time were reporting on attacks on Kashmiris.

“Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the researcher asked in their conclusion.

The memo, circulated among other employees, did not answer that question. But it did highlight how the platform’s own algorithms or default settings played a part in spurring such objectionable content. The employee noted that there were clear “blind spots”, particularly in “local-language content”. They said they hoped the findings would start a conversation on how to avoid such “integrity harms”, especially for those who “differ significantly” from the typical US user.

Even though the research was conducted over three weeks that were not an average representation, the employee acknowledged that it showed how such “unmoderated” and problematic content “could totally take over” during “a major crisis event”.

A Facebook spokesperson said the test study “inspired a deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”

“Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages,” the spokesperson said.
