WhatsApp bans 16.6 lakh Indian accounts in April
Meta-owned instant messaging platform WhatsApp banned 16.6 lakh Indian accounts in April 2022 (April 1-30) for violating its guidelines, according to its latest monthly compliance report published under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The company had banned over 18 lakh accounts in March 2022.
The report contains details of the user complaints received and the corresponding action taken by WhatsApp, as well as WhatsApp’s own preventive actions to combat abuse on its platform.
WhatsApp received 844 user reports in April and took 'action' on 123 of them. Of these, 90 reports related to account support, against which no action was taken; 670 were ban appeals, of which 122 were actioned; and the remaining 37, 34, and 13 reports fell under other support, product support, and safety respectively, with only one actioned among them.
"We employ a team of engineers, data scientists, analysts, researchers, and experts in law enforcement, online safety, and technology developments to oversee these efforts. We enable users to block contacts and to report problematic content and contacts to us from inside the app. We pay close attention to user feedback and engage with specialists in stemming misinformation, promoting cybersecurity, and preserving election integrity," says a company statement.
WhatsApp has over 400 million users in India. "Over the years, we have consistently invested in state-of-the-art technology, artificial intelligence, data scientists, experts, and processes, to support user safety," the company adds.
Meta, earlier known as Facebook, is yet to release its monthly action-taken report for India. In its Community Standards Enforcement Report released on May 17, 2022, Meta said the prevalence of violating content on Facebook and Instagram remained relatively consistent, but decreased in some policy areas, from Q4 2021 to Q1 2022 (Jan-March).
During the Jan-March period, over 1.8 billion pieces of spam content were actioned, up from 1.2 billion in Q4 2021, after a "small number of users" were found making a large volume of violating posts globally.
The company said action was taken against 21.7 million pieces of violence and incitement content, up from 12.4 million in Q4 2021.
On Instagram, action was taken on over 1.8 million pieces of drug content, up from 1.2 million in Q4 2021, owing to updates made to its proactive detection technologies.
The company said its proactive detection rate for bullying and harassment content rose from 58.8% in Q4 2021 to 67% in Q1 2022. "We continued to see a slight decrease in prevalence on Facebook from 0.11-0.12% in Q4 2021 to 0.09% in Q1 2022," says the company.