As software becomes the key competitive advantage, organizations no longer enjoy the luxury of selecting either ‘speed’ or ‘quality’ to focus on – both are critical. Continuous Integration, Continuous Testing, and Continuous Delivery have emerged as key catalysts for enabling quality at speed. Of the three, Continuous Testing is by far the most challenging and requires dramatic changes to people, processes, and technology.
While Continuous Integration is primarily a tool-driven activity and Continuous Delivery is a tool- and team-driven activity, Continuous Testing involves tools, teams, individuals, and services.
When implemented correctly, Continuous Testing serves as the centerpiece of the Agile downstream process – executing automated tests as part of the software delivery pipeline in order to provide risk-based feedback as rapidly as possible. Mastering Continuous Testing is essential for controlling business risk given the increased complexity and pace of application delivery today.
Agile is about transforming people, processes, and technologies, and in spite of all this change, one thing tends to remain the same: the software testing process. One recent study reports that 70% of organizations have adopted Agile, but only 30% automate testing. A separate study found that while Agile adoption is now near 88%, only 26% of Agile organizations have broadly adopted test automation.
Consider how test cases are commonly written today. Business Analysts drop a 200-page, text-heavy Business Requirements Document (BRD) on Testers’ desks. Testers have to digest and analyze requirements for complete understanding and transform them into test cases covering all possible outcomes. They write them one-by-one in Microsoft Word or Excel – or if they’re lucky, a test management tool. For a large project with dozens of testers, this cumbersome, manual process is costly and error-prone.
In other words, testing processes remain stuck in the past even as organizations invest considerable time and effort into transforming their development processes to meet today’s and tomorrow’s business demands.
What makes test automation so hard in an Agile context is that the test object is constantly changing, as it is gradually refined over sprints. If there is one thing a test automation engineer dreads, it is constant change and the resulting maintenance burden.
Despite these difficulties, the hard truth is that without automated testing, it is not possible to make Agile work to its full potential, at least not with frequent high-quality deployments.
When developing a test automation strategy, it's important to thoughtfully determine both what is worthwhile to automate and what should be automated first, in order to gain the most from automation.
There are many factors that can determine what should be automated first, and some candidates are typically first in line for automation.
Avoid comparisons between manual and automated testing: both are needed, and each serves a different purpose. An automated test is a set of instructions written by a person to perform a specific task. Every time an automated test is run, it follows exactly the same steps as instructed and checks only the things it has been asked to check.
During manual testing, on the other hand, a tester's brain is engaged and can spot other failures in the system. The test steps may not be the same every time, as the tester can alter the flows during testing; this is especially true in the case of exploratory testing.
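The contrast above can be sketched in code. An automated check performs the same scripted steps on every run and verifies only what it was written to verify; a minimal sketch using Python's built-in unittest, where the `transfer` function and its behavior are hypothetical stand-ins for the system under test:

```python
import unittest

def transfer(balance, amount):
    """Hypothetical banking helper: deduct amount if funds allow."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class TransferTest(unittest.TestCase):
    # The script follows the same steps every run and asserts only
    # the conditions it was explicitly written to check.
    def test_successful_transfer(self):
        self.assertEqual(transfer(100, 30), 70)

    def test_insufficient_funds(self):
        with self.assertRaises(ValueError):
            transfer(100, 200)
    # Anything not asserted here -- a slow response, a garbled label --
    # passes unnoticed, which is exactly where a human tester adds value.
```

Run with `python -m unittest` to execute both checks. Note that the suite will happily pass even if the system has defects it was never asked to look for.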
To reduce the time your team spends writing functional test cases, there are tools that can automatically generate most of them for you from your defined requirements. This is possible for a couple of reasons:
First, requirements and tests have a special relationship. They offer two ways to envision how a product should work. Functional tests are almost always based on requirements artifacts.
Second, teams are increasingly using more visual techniques, like use cases and process models, to define requirements. Most modern requirements tools use these models as the basis for auto-generating test scenarios. They scan each possible decision point and branch within a process and generate a test case to address them all. For example, in an automated banking transaction with 3 possible outcomes – success, cancellation, or an error due to insufficient funds – auto-test generation will create separate test scenarios to provide full coverage for all possible situations.
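The generation step described above amounts to enumerating every branch of the process model and emitting one test scenario per path. A minimal sketch of that idea, assuming the model is represented as a simple dictionary of decision points; the banking example and all names here are illustrative, not any specific tool's format:

```python
# Each decision point maps to the outcomes that can follow it;
# None marks the end of a path. This toy model mirrors the banking
# transaction above, with three possible outcomes.
process_model = {
    "submit_transaction": ["success", "cancellation", "insufficient_funds"],
    "success": None,
    "cancellation": None,
    "insufficient_funds": None,
}

def generate_scenarios(model, start):
    """Walk every branch from `start`, returning one step list per path."""
    outcomes = model[start]
    if outcomes is None:
        return [[start]]
    scenarios = []
    for outcome in outcomes:
        for tail in generate_scenarios(model, outcome):
            scenarios.append([start] + tail)
    return scenarios

for path in generate_scenarios(process_model, "submit_transaction"):
    print(" -> ".join(path))
```

Because the walk visits every branch, the three outcomes yield three scenarios, giving full coverage of the decision point; a real requirements tool does the same over richer models with nested decisions and loops.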
Blueprint’s Storyteller can auto-generate test cases based on functional requirements. For procedural requirements, it generates a complete set of test cases for every scenario in every use case. It can also create sets of test cases designed for different testing levels: testers who focus on individual processes can use test cases generated from process models, while business stakeholders can use a smaller number of higher-level test cases generated from use cases.
It’s important to know that auto-generation of test cases from requirements won’t cover all of the tests a team needs. Someone still needs to write the broader, more global tests, like those for non-functional requirements, system testing, and integration testing. But for projects focused on delivering user-facing software, with the large majority of requirements being functional, auto-generating test cases can give your teams a major jump start on confirming quality.