Facebook actioned over 30 million content pieces under new IT Rules 2021, first compliance report reveals

Facebook “actioned” more than 30 million content pieces across 10 violation categories between May 15 and June 15 in the country, the social media giant said in its first monthly compliance report mandated by the IT Rules. Instagram took action against nearly two million pieces across nine categories during the same period. Under the new IT Rules, large digital platforms (with over 5 million users) have to publish periodic compliance reports every month, detailing the complaints received and the action taken on them. The report must also include the number of specific communication links or parts of information that the intermediary has removed or disabled access to in pursuance of any proactive monitoring conducted using automated tools.

Facebook took action on over 30 million pieces of content across multiple categories between May 15 and June 15, while Instagram acted against nearly 2 million pieces. A Facebook spokesperson said that over the past few years, the company has continuously invested in technology, people and processes to keep users safe online and enable them to express themselves freely on its platform. “We use a combination of artificial intelligence, reports from our community and reviews by our teams to identify and review content that goes against our policies. We will continue to add more information and build on these efforts towards transparency as we evolve this report,” the spokesperson said in a statement to PTI.

Facebook said its next report would be published on July 15, detailing the complaints received and the action taken. “We expect to publish subsequent editions of the report with an interval of 30-45 days after the reporting period to allow sufficient time for data collection and validation. We will continue to bring more transparency to our work and will include more information about our efforts in subsequent reports,” it said. Earlier this week, Facebook had said it would publish an interim report on July 2 containing the amount of content it removed proactively between May 15 and June 15. The final report will be published on July 15, detailing the user complaints received and the action taken.

The July 15 report will also include data related to WhatsApp, which is part of Facebook’s family of apps. Other major platforms that have made their reports public include Google and homegrown platform Koo. Facebook said in its report that it actioned more than 30 million pieces of content across 10 categories between May 15 and June 15. This includes content related to spam (25 million), violent and graphic content (2.5 million), adult nudity and sexual activity (1.8 million), and hate speech (311,000). Other categories under which content was actioned include bullying and harassment (118,000), suicide and self-injury (589,000), dangerous organizations and individuals: terrorist propaganda (106,000), and dangerous organizations and individuals: organized hate (75,000).

“Action Taken” refers to the number of pieces of content (such as posts, photos, videos or comments) against which action was taken for violating standards. Taking action could include removing a piece of content from Facebook or Instagram, or covering photos or videos that may be disturbing to some audiences with a warning. The proactive rate, which indicates the percentage of all actioned content or accounts that Facebook found and flagged using technology before users reported them, was between 96.4 and 99.9 percent in most of these cases. The proactive rate for removal of content related to bullying and harassment was 36.7 percent, as this content is contextual and highly personal by nature. In many instances, people need to report this behavior to Facebook before it can identify or remove such content.
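For illustration, the proactive rate can be read as the share of actioned content that automated systems flagged before any user report reached the platform. A minimal sketch in Python, using hypothetical counts that are not taken from the report:

```python
def proactive_rate(flagged_by_systems: int, total_actioned: int) -> float:
    """Percentage of actioned content that automated systems flagged
    before any user reported it. Inputs here are hypothetical counts."""
    if total_actioned == 0:
        return 0.0
    return 100.0 * flagged_by_systems / total_actioned

# Hypothetical example: if 24.7 million of 25 million actioned spam
# pieces were flagged proactively, the proactive rate is 98.8 percent.
print(f"{proactive_rate(24_700_000, 25_000_000):.1f}%")  # 98.8%
```

This also shows why bullying and harassment scores so low on the metric: when most violating content only surfaces through user reports, the numerator stays small relative to the total actioned.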

For Instagram, 2 million pieces of content were actioned across nine categories between May 15 and June 15. This includes content related to suicide and self-injury (699,000), violent and graphic content (668,000), adult nudity and sexual activity (490,000), and bullying and harassment (108,000). Other categories under which content was actioned include hate speech (53,000), dangerous organizations and individuals: terrorist propaganda (5,800), and dangerous organizations and individuals: organized hate (6,200). Google said that in April this year, it received 27,762 complaints from individual users in India over alleged violations of local laws or personal rights, which resulted in the removal of 59,350 pieces of content.

Koo said in its report that it proactively moderated 54,235 pieces of content, while 5,502 posts were reported by its users during June. Under the IT Rules, significant social media intermediaries are also required to appoint a Chief Compliance Officer, a Nodal Officer and a Grievance Officer, and these officers must be resident in India. Non-compliance with the IT Rules would result in these platforms losing their intermediary status, which grants them immunity from liability over any third-party data hosted by them. In other words, they could be liable for criminal action in case of complaints.

Facebook recently named Spoorthi Priya as its Grievance Officer in India. India is a major market for global digital platforms. According to data cited by the government recently, India has 53 crore WhatsApp users, 41 crore Facebook users and 21 crore Instagram users, while 1.75 crore account holders are on the microblogging platform Twitter.
