Bullying, stalking, body-shaming and other forms of attacks on adolescent girls are not confined to the physical world. Girls face these attacks in the cyber world too, leaving them sad, worried and dejected.
Vasudeva Varma, Head of the Information Retrieval and Extraction Lab at the International Institute of Information Technology, Hyderabad (IIIT-H), says that 35-40 per cent of pre-teen girls report feeling low and unhappy. “About 20 per cent of them are clinically depressed necessitating medical intervention. The time they spend in the cyber world is also a major reason,” he says.
With a view to helping them fight toxicity in the cyber world, IIIT-H has launched Project Angel, which seeks to use Natural Language Processing (NLP) tools to build a ‘resident angel’ for their smart devices. “It will be in the form of a regular user of a social media platform. When you follow it, it will look for objectionable content, categorised as bullying or stalking with objectionable language, and alert you. It will step in to protect them from online toxicity and steer them towards positivity with appropriate reading recommendations,” he says.
The researchers have begun working on fundamental building blocks such as detecting social biases and toxicity online in the form of body-shaming, echo chambers and sexual harassment. “We are essentially building NLP tool sets or models to understand the language of teens across continents. In the first phase, the tool will smart-read the content and classify and categorise it,” he says.
The angel app would scan the web for positive content, capture it and present it to the user. “What this means is that if positive messages are found on Twitter, we are trying to transfer that into an Instagram message so that the message will come from the ‘angel’ present on the Instagram network,” Varma says.
With an anthropologist and a researcher at Adobe on board, whose field of interest is affective computing (a branch of computing concerned with systems that can interpret, process and simulate emotions), this multi-disciplinary project seeks to bring in novel insights and analyses not just of language in social media, but also of images posted and shared online.
The Information Retrieval and Extraction Lab researchers work on identifying toxic content online, which includes hate speech and sexist language. The deep neural networks developed at the lab can detect sexist comments online, label them and categorise them.
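The detect-label-categorise step described above can be illustrated in miniature. The sketch below is a deliberately simple keyword-based classifier, not the lab's method: Project Angel uses deep neural networks, and the category names and keyword lists here are purely illustrative assumptions.

```python
# Illustrative sketch only: a keyword-based stand-in for the kind of
# classify-and-categorise step the article describes. The lab's real
# systems use deep neural networks; all categories and keywords below
# are invented for demonstration.

CATEGORY_KEYWORDS = {
    "bullying": {"loser", "worthless", "nobody likes you"},
    "body_shaming": {"fat", "ugly", "too skinny"},
    "sexism": {"girls can't", "women belong"},
}

def categorise(text: str) -> list[str]:
    """Return the toxicity categories whose keywords appear in the text."""
    lowered = text.lower()
    return sorted(
        category
        for category, keywords in CATEGORY_KEYWORDS.items()
        if any(keyword in lowered for keyword in keywords)
    )

# A 'resident angel' of the kind described would run a (far stronger)
# model over a user's feed and raise an alert on any match.
print(categorise("You're a loser and ugly"))  # -> ['body_shaming', 'bullying']
print(categorise("Have a great day!"))        # -> []
```

In practice, a learned model replaces the keyword sets, but the interface is the same: text in, zero or more toxicity labels out.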
Nimmi Rangaswamy, a human-computer interaction anthropologist, says she is focussing on Instagram, considered the place to be for young netizens. Her students are following a curated set of influencers on Instagram and analysing their posts and the type of comments they attract.
Source: The Hindu