Press "Enter" to skip to content

Blocked 2 million accounts in India in 1 month, says WhatsApp in new report – Hindustan Times

New Delhi: WhatsApp blocked at least 2 million accounts in India from May 15 to June 15, a compliance report released by the Facebook-owned messaging platform said.

The accounts were blocked using automated technology.

The platform blocks around 8 million accounts a month globally.

In this one-month period, the company received 345 complaints from India, where it has 530 million users. These included 70 cases of account support, in which users reported problems with their accounts (WhatsApp took action on none of these); 204 appeals against account bans (it acted on 63); 20 cases of other support; 43 cases of product support, which relate to services such as payments offered by the company; and eight safety issues that were flagged. Overall, of the 345 complaints WhatsApp received from May 15 to June 15, it acted on 63 accounts.

The company has challenged the traceability clause of the social media and intermediary guidelines, which also require it to publish a monthly grievance compliance report. This is the first such report WhatsApp has published for India. WhatsApp is now the fourth company, after Google, Facebook and Twitter, to publish such a report and appoint officers in keeping with the new guidelines.

A person familiar with WhatsApp’s compliance efforts said on condition of anonymity that automated takedowns of accounts happen using a range of behavioural signals, such as mass broadcasts, without breaking end-to-end encryption. “Even though WhatsApp is an encrypted platform, it takes action against accounts based on automated tools as it is committed to ensuring the safety of its users,” the person said.

Kazim Rizvi, founder of policy think tank The Dialogue, said, “It is possible to take down accounts without breaking end-to-end encryption. WhatsApp utilises AI technology to detect spammy behaviour of users. For instance, if 100 messages are sent to unknown contacts within 15 seconds of registration, in such cases the platform may block the user. The technology keeps learning from such instances of blocking and then prevents abuse by blocking users if it can predict the same through signals and patterns. For instance, behavioural cues like bulk registration along with data points like IP address help in detecting instances at registration itself. It is important to note that platforms cannot take down content due to end-to-end encryption and can only ban accounts.”
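To make the kind of heuristic Rizvi describes concrete, the sketch below shows a toy rule-based check in Python. It is an illustration only: the function names, data structures and thresholds (drawn from his “100 messages within 15 seconds of registration” example) are hypothetical and do not describe WhatsApp’s actual systems.

```python
# Illustrative sketch only: a toy rule-based filter inspired by the behaviour
# described in the article (rapid messaging to unknown contacts right after
# registration, bulk registrations from one IP address). All names and
# thresholds here are hypothetical, not WhatsApp's real systems.
from dataclasses import dataclass, field

@dataclass
class Account:
    registered_at: float      # epoch seconds when the account was created
    ip_address: str           # IP address used at registration
    messages_to_unknown: list = field(default_factory=list)  # send timestamps

def is_suspicious(account: Account,
                  registrations_per_ip: dict,
                  burst_window_s: float = 15.0,
                  burst_threshold: int = 100,
                  bulk_reg_threshold: int = 50) -> bool:
    """Flag an account using the kinds of behavioural signals the article cites."""
    # Signal 1: many messages to unknown contacts shortly after registration.
    early_sends = [t for t in account.messages_to_unknown
                   if t - account.registered_at <= burst_window_s]
    if len(early_sends) >= burst_threshold:
        return True
    # Signal 2: bulk registrations originating from the same IP address.
    if registrations_per_ip.get(account.ip_address, 0) >= bulk_reg_threshold:
        return True
    return False
```

None of these signals require reading message contents, which is why such checks can run without weakening end-to-end encryption.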

Facebook, which published a compliance report last week, said in a more detailed report published Thursday that it received 646 grievances from India between May 15 and June 15 through the newly created grievance channel, with the highest number of complaints being about accounts being hacked.

A person familiar with the matter said on condition of anonymity that while the number of complaints received through the grievance redressal channel was far lower than the number of reports filed on the platform itself, the route provides an opportunity for people who do not have a Facebook account to flag content.

Instagram, also owned by Facebook, said it received 36 complaints, of which 25 were regarding nudity. “Of these incoming reports, we provided tools for users to resolve the issues in 10 cases. These include pre-established channels to report content for specific violations, self-remediation flows where they can download their data, avenues to address account hacked issues etc,” the report stated. Instagram took action against two million links across nine categories in the period between May 15 and June 15.

In its first report, released last week, Facebook said it removed a little over 30 million pieces of content (posts, profiles, pages) in India between May 15 and June 15. The content taken down included violent and graphic material, adult nudity and sexual activity, and content related to suicide and self-injury. The report released on Friday is an interim disclosure, and the company said it will publish further details on July 15.

“Platforms like WhatsApp also use technology to scan all the unencrypted data sets like profile pictures, group photos and statuses to find out if anyone has uploaded any illegal or harmful content. PhotoDNA technology is used to find if a user has uploaded child sexual abuse material as their group or profile photo. Moreover, any incident of child safety is reported to the police for further investigation. Such accounts may also be blocked for violation of terms of service or violation of the law of the land. In all the above scenarios, accounts are blocked both reactively, and with the help of machine learning technology, proactively, without weakening end-to-end encryption. The transparency report indicates that there are existing technology mechanisms that support the safety and security efforts of the State to a reasonable extent, without having to break end-to-end encryption. Investment in such technologies across all similar platforms will help create a safe user experience while protecting their privacy,” Rizvi added.
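As an illustration of the hash-matching approach Rizvi describes, the sketch below compares an uploaded, unencrypted profile photo against a list of known hashes. Real systems such as PhotoDNA rely on robust perceptual hashing that survives resizing and re-encoding; a plain SHA-256 digest is used here purely as a stand-in, and every name in the snippet is hypothetical.

```python
# Illustrative sketch only: matching an uploaded (unencrypted) profile photo
# against a list of known harmful-image hashes. PhotoDNA itself uses perceptual
# hashes; SHA-256 here is a simplified stand-in for the sake of the example.
import hashlib

def image_digest(image_bytes: bytes) -> str:
    """Return a hex digest for an image (stand-in for a perceptual hash)."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_hashes(image_bytes: bytes, known_hashes: set) -> bool:
    """True if the uploaded image matches a hash on the known-content list."""
    return image_digest(image_bytes) in known_hashes

# Usage: block the account and escalate if a profile photo matches.
known_hashes = {"<digest of previously identified content>"}  # placeholder entry
if matches_known_hashes(b"...uploaded profile photo bytes...", known_hashes):
    print("Match found: block account and report to authorities for review.")
```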
