You are about to finalize your RPA bot development. You’re pleased with your work and excited to release your bots into the production environment to achieve the promised value.
However, if you have not performed all the necessary checks to ensure your bots adhere to your company standards and, more importantly, remain under control, that excitement could quickly turn into a frantic scramble to fix what you’ve just released.
A production entry checklist is a vital tool to ensure that your bots adhere to business rules and standards and remain compliant with any regulations your organization is subject to.
The bot production entry checklist should be understood not merely as a list of items to check off, but as an instrument for driving discipline and ensuring consistency. If bot builders inside and outside your organization follow all established policies and procedures defined by the Centre of Excellence (CoE), going through the checklist will be like a walk in the park.
So what must a first-class bot production entry checklist consider?
Here are 6 essential RPA checks that you must perform for each bot that is about to be released into production:
1. Risk Assessment has been Carried Out
Ensure the appropriate IT and/or Business compliance oversight function completes a risk assessment for each and every bot. This exercise will allow you to determine whether your bot is “critical” (e.g. a bot executing a SOX control, or a bot reading and processing sensitive personal data). The more critical your bot is, the more monitoring mechanisms should be established. This is a crucial check, especially if your organization operates in a global, highly regulated environment.
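As a hypothetical sketch only, a risk assessment outcome like the one described above could be reduced to a simple criticality score. The factors, weights, and tier names below are illustrative assumptions, not an established scoring standard:

```python
# Illustrative sketch: deriving a bot criticality tier from risk factors.
# Factor names, weights, and thresholds are hypothetical.

def assess_criticality(executes_sox_control: bool,
                       handles_sensitive_data: bool,
                       regulated_jurisdictions: int) -> str:
    """Return a criticality tier that drives the level of monitoring."""
    score = 0
    if executes_sox_control:
        score += 2  # bot executes a SOX control
    if handles_sensitive_data:
        score += 2  # bot reads/processes sensitive personal data
    score += min(regulated_jurisdictions, 3)  # cap jurisdictional weight
    if score >= 4:
        return "critical"
    if score >= 2:
        return "elevated"
    return "standard"

# A SOX-control bot touching personal data in two regulated jurisdictions:
print(assess_criticality(True, True, 2))  # → "critical"
```

The output tier can then be used to decide how much monitoring a bot warrants before release.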
2. Code Review has been Performed
Ensure that you perform an independent code review for each bot against your development principles and standards. This is THE most important and challenging activity. When your organization begins to scale bot development, a lot of developers (from different external service providers and/or internal IT teams) will be working across multiple locations and probably under high pressure to deliver.
It is crucial to have independent code reviewers sitting in a central or local function, whether a dedicated CoE or an empowered IT function. These reviewers should be highly skilled (subject matter experts who know how to code in the relevant programming languages, based on the RPA tools used) and should be trained to verify that all development standards have been followed. They are ultimately the gatekeepers of the production environment.
The role of independent code reviewers should go beyond the compliance aspects (detection of errors and malicious code): they need to ensure that your bot development is reusable. Code review is also crucial to ensure that you have highly maintainable and expandable bots, and not just “toy” bots.
3. Resilience Assessment has been Carried Out
Ensure that bot resilience has been assessed and measured against an established model. As a rule of thumb, the less resilient your bot is, the more monitoring mechanisms you should put in place. In conjunction with the resilience factor, you should also consider the criticality factor (see the first point). The degree of resilience combined with the criticality level should ultimately drive the nature and level of monitoring procedures.
So what makes a bot not resilient?
- Non-static internal/external application interfaces
- High frequency of changes to processes
- Unpredictable or unstable data formats
- Volatile bot runtime environment
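The interplay between criticality and resilience described above could, as a rough sketch, be encoded as a simple lookup table. The tiers and monitoring labels here are illustrative assumptions, not taken from an established model:

```python
# Hypothetical sketch: mapping (criticality, resilience) ratings to a
# monitoring level. Tier names and monitoring labels are illustrative.

MONITORING_MATRIX = {
    # (criticality, resilience): monitoring level
    ("critical", "low"):  "continuous automated alerting plus daily review",
    ("critical", "high"): "automated alerting",
    ("standard", "low"):  "automated alerting",
    ("standard", "high"): "periodic log review",
}

def monitoring_level(criticality: str, resilience: str) -> str:
    """Look up the monitoring regime for a given bot profile."""
    return MONITORING_MATRIX[(criticality, resilience)]

# A critical bot with low resilience gets the heaviest monitoring:
print(monitoring_level("critical", "low"))
```

The point of such a matrix is consistency: two bots with the same risk profile always receive the same monitoring regime.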
You should establish a mix of manual and automated change communication protocols to flag changes to the environment where the bots operate.
Automated vs. manual change communication protocols
A manual change communication protocol flags changes via procedures that rely heavily on human intervention - for example, participating in technical design authority boards, or establishing a technical monitoring board with all bot owners to identify upcoming technical changes. These changes can then be flagged and cascaded to the relevant teams, who run an impact assessment to understand the effect on bots and other downstream processes before deciding on bot re-calibration activities.
An automated change communication protocol is about flagging changes via automatic mechanisms that are not reliant on human intervention. An example of a continuous monitoring mechanism that would be applicable here would be the ability to have regulatory change information auto-populated into your business process modeling tool as changes occur. In tools with such capabilities, such as Blueprint’s Enterprise Automation Suite, any changes to regulatory artifacts that apply to your business are brought into the platform, processed, and tagged through NLP to ensure the information is understood and the downstream impacts it may have on business processes and bots can be easily identified.
4. Business Continuity Plan has been Defined and Communicated
Ensure that a business continuity plan has been formally documented and approved by relevant stakeholders (in particular, Process Owners, IT Service Management and Business Service Management functions). In the event of a bot failure, the purpose of the business continuity plan is to detail the business process and contacts, as well as a set of contingencies and recovery time objectives, required to minimize potential risk or harm to the business. Each bot should have a tailored action protocol that should guide bot operators during operation recovery and issue resolution.
Incident-resolving agencies for issue resolution should be called out in the business continuity plan to avoid any confusion. You should map your expected issues/error logs to the appropriate resolving agencies (e.g. if the virtual desktop cannot be reached, contact Infrastructure team A; if access to SAP ECC is failing, contact Security team B; if the upload file for contract loading is missing from the Remedy ticket, contact Contract Management team C; if the code is missing an "If" statement, contact RPA development team D). Establish a decision tree for each bot.
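A minimal sketch of such a decision tree follows, routing known failure signatures to the resolving team named in the business continuity plan. The patterns and team names reuse the examples above; the matching logic is an illustrative assumption:

```python
# Illustrative sketch: routing error logs to the resolving agency named
# in the business continuity plan. Patterns/teams reuse the examples above.

ROUTING_RULES = [
    # (lowercase failure signature, resolving agency)
    ("virtual desktop cannot be reached", "Infrastructure team A"),
    ("access to sap ecc is failing",      "Security team B"),
    ("upload file for contract loading",  "Contract Management team C"),
]
DEFAULT_RESOLVER = "RPA development team D"  # e.g. code defects

def resolving_team(error_log: str) -> str:
    """Return the first matching resolving agency for an error log line."""
    log = error_log.lower()
    for pattern, team in ROUTING_RULES:
        if pattern in log:
            return team
    return DEFAULT_RESOLVER  # unrecognized failures go to the developers

print(resolving_team("ERROR: access to SAP ECC is failing (timeout)"))
```

In practice such routing would live in your orchestrator or monitoring tooling, but even a table like this removes ambiguity about who gets called at 3 a.m.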
It is essential (especially for bots flagged as critical) to establish an accelerated issue remediation path to quickly move code corrections to the production environment. The speed of reaction can be crucial in order to avoid significant disruptions (especially if business users are not responsive enough outside of business hours).
By mapping your business continuity plan alongside all of your other processes in a tool such as Blueprint’s Enterprise Automation Suite, you can establish a ‘single source of truth’ on what the protocol is in the event of a bot failure, and rest assured that the contingency process is up to date and easily accessible to all relevant stakeholders.
5. Traceability has been Established/Maintained
You should ensure that your bot can be traced back to its origin. If, for example, you are using an RPA platform to support your bot development, you should be able to go back to your planning/process modeling tool in order to quickly find specific deliverables (a design document, a scheduling plan, a business continuity plan, a test script, etc.).
Traceability is key here for one main reason: bot developers and bot operators can go back to all relevant documentation when monitoring, re-calibrating or improving bots. Needless to say, being able to trace back all approval workflows is also key from accountability and compliance perspectives.
6. Security Procedures and Mechanisms have been Enabled
Ensure that bot access is allowed, approved and controlled. Bots will transact with a wide range of systems/applications, so it is very important to ensure that appropriate security mechanisms are enabled prior to release into the production environment (this should be a one-time readiness activity per system/application). For example, system owners, together with the relevant security team, need to adjust their existing system access policies and procedures to clearly document how bot accounts will be created, maintained and monitored.
This should be a prerequisite before deploying a bot in any application/system. Key things to consider include: account type, access type (standard, privileged, etc.), compliance checks (segregation of duties analysis, etc.), provisioning process, approval process, periodic authorization review, auditability of bot account usage, and monitoring of bot account usage from non-authorized terminals.
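The prerequisites above lend themselves to a simple readiness gate. This is a hypothetical sketch: the field names condense the considerations listed above, and the pass/fail logic is an illustrative assumption:

```python
# Hypothetical sketch: encoding the per-system security prerequisites as
# a gate that must pass before a bot account goes to production.
from dataclasses import dataclass

@dataclass
class BotAccountReadiness:
    account_type_defined: bool
    access_type_approved: bool       # standard, privileged, etc.
    sod_analysis_done: bool          # segregation-of-duties check
    provisioning_documented: bool
    periodic_review_scheduled: bool
    usage_auditable: bool

    def missing_items(self) -> list:
        """List every prerequisite that is still unmet."""
        return [name for name, done in vars(self).items() if not done]

    def ready_for_production(self) -> bool:
        return not self.missing_items()

checks = BotAccountReadiness(True, True, True, True, False, True)
print(checks.ready_for_production())  # False
print(checks.missing_items())         # ['periodic_review_scheduled']
```

A gate like this makes the one-time readiness activity auditable: the bot is blocked until every item is explicitly ticked, and the unmet items are named.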
Technology to Digitally Enable your Bot Pre-Release Checklist
It’s clear that a production entry checklist can be a critical instrument for driving discipline, ensuring consistency, and ensuring that your bots adhere to business rules and standards and remain compliant with any regulations your organization is subject to.
However, instead of relying on a simple checklist, the easiest way to ensure all bot builders inside and outside your organization follow all established policies and procedures defined by the Centre of Excellence (CoE) is to leverage technology that enforces these policies and ensures all relevant information is accessible, versioned, and traced to all related processes and bots.
Blueprint Enterprise Automation Suite helps organizations achieve the checks and balances outlined above in a number of ways. When a process has been established and is ready for automation, the solution can quickly and easily generate test scripts so that code reviewers do not have to manually write all the tests needed to ensure the bot will perform as required. This simplifies and expedites the code review step of the checklist.
Blueprint Enterprise Automation Suite supports the creation of numerous types of processes. This means that change communication protocols and business continuity plans can also be outlined in the platform, ensuring that bot resilience can easily be assessed and measured against an established model. In the event of a bot failure, there is a formally documented and approved plan that details the business process and contacts, as well as contingencies and recovery time objectives, that will help to minimize potential risk or harm to the business.
Lastly, every artifact within the Blueprint Enterprise Automation Suite can be traced to any other object - including value streams, regulations, business rules, policies, and controls, or non-functional requirements. This makes it easy to assess the impact on a bot should a change occur, as well as provide bot developers and operators with the relevant documentation needed when monitoring, re-calibrating or improving bots. Needless to say, being able to trace back all approval workflows is also key from accountability and compliance perspectives.
To learn how Blueprint’s Enterprise Automation Suite will help you confidently release bots to production, as well as scale your RPA initiatives enterprise-wide, download our Enterprise Automation Suite datasheet.