
Opinion | We need to audit our bots to avoid loss of integrity

Robotic process automation (RPA), also known as software robots or bots, mimics human actions and can help an organization automate manual, repetitive tasks. Regulatory reporting dashboards, credit control validations and customer complaint interfaces are all good use cases for deploying bots. While bots help an organization or function deliver its services faster, better and cheaper, they may also expose the organization to the risk of inaccurate or erroneous processing.

You do not want a bot to push false alarms to business partners, you do not want customer complaints routed to the wrong agents, and you do not want inaccurate credit-score processing. While it’s important to leverage the benefits of automation, it’s equally important to manage the risk of deploying this technology in your organization.

Bot auditing is critical to ensuring an organization takes a measured approach to bot development. An organization should build “security by design” into its development framework. When an organization audits its bots, emphasis should be placed on coverage of business-process controls in the robotics environment (SOX/ICFR controls), compliance with regulatory requirements (e.g., data privacy standards, PCI DSS), cybersecurity controls, incident and failover controls, and multiple other areas.

It is important to note that a bot environment may require controls specific to its implementation. Hence, an organization is expected to make the relevant changes to its control environment.

Threat Surface

To determine the risks posed by a bot, it is important to identify and analyse its threat surface. The overall threat surface of a bot can be classified into three categories:

Bot vulnerabilities: Bots, like any other application, will have vulnerabilities that external attackers will try to exploit. These vulnerabilities can be identified through the vulnerability assessments and application security testing a firm performs.

Bot abuse: A bot has neither the intent nor the free will to perform a fraudulent transaction. However, it can be abused, manipulated or trained by the bot owner (or any other malicious insider with access) to perform fraudulent transactions or leak confidential data.

Bot mutation: A bot is designed to perform a fixed sequence of tasks to complete a process. In real life, however, contingencies may cause the bot to perform a transaction partially or incorrectly, leading to a loss of integrity. This risk increases significantly for intelligent automations involving machine learning, natural language processing, etc.
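One common mitigation for the mutation risk described above is a post-execution integrity check: run the bot's fixed sequence, then verify that the end state actually matches what a complete transaction should produce, and flag partial runs for human review. The sketch below is purely illustrative; the function and step names (run_transaction, post_entry, notify) are hypothetical and not drawn from any specific RPA product.

```python
# Minimal sketch of a post-execution integrity check for a bot transaction.
# All names here (run_transaction, post_entry, notify) are hypothetical.

def run_transaction(steps, verify):
    """Run each step of a bot's fixed sequence, then verify the end state.

    steps  -- ordered list of (name, action) pairs, where action is a callable
    verify -- callable returning True only if the full transaction landed
    """
    completed = []
    try:
        for name, action in steps:
            action()
            completed.append(name)
    except Exception as exc:
        # Partial execution: surface it for review instead of failing silently.
        return {"status": "partial", "completed": completed, "error": str(exc)}
    if not verify():
        # Every step ran, but the end state is wrong: a loss of integrity.
        return {"status": "integrity_failure", "completed": completed}
    return {"status": "ok", "completed": completed}


# Example: a two-step transaction where the second step hits a contingency.
ledger = []

def post_entry():
    ledger.append({"amount": 100})

def notify():
    raise RuntimeError("mail gateway down")  # simulated contingency

result = run_transaction(
    [("post_entry", post_entry), ("notify", notify)],
    verify=lambda: len(ledger) == 1,
)
# result reports a "partial" run with only "post_entry" completed,
# so the transaction can be routed to a human rather than assumed done.
```

The design choice worth noting is that the bot never decides on its own that a partial run was "good enough": every non-ok status becomes an audit event.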

At the end of the day, a bot is software that mimics human actions. Hence, there is a tendency to treat it like any other application. However, treating a bot like an application will only help you identify bot vulnerabilities, not how the bot can be manipulated or how it may malfunction under certain circumstances.

Thus, risk assessment of a bot requires a multi-skilled team consisting of RPA technical architects and business analysts along with cybersecurity professionals. Without a holistic approach to RPA risk assessment, one may just end up scratching the surface.

Rohit Mahajan is president (risk advisory) at Deloitte India.

Source: Livemint