Note from the author: My perspective on most things is that the 'glass is half full' rather than half empty. This attitude carries over to the advice I offer on automated software testing as well. I should point out, however, that there is an ever-increasing awareness among others experienced in this field, confirmed by my own experience, that many efforts in test automation do not live up to expectations. A lot of effort goes into developing and maintaining test automation, and even once it's built you may or may not recoup your investment. It's very important to perform a good cost/benefit analysis on whatever manual testing you plan to automate. The successes I've seen have mostly been in focused areas of the application where it made sense to automate, rather than in complete automation efforts. Also, skilled people were involved in these efforts and they were allowed the time to do it right.
Test automation can add a lot of complexity and cost to a test team's effort, but it can also provide some valuable assistance if it's done by the right people, in the right environment, and where it makes sense to do so. I hope that by sharing some pointers I feel are important, you'll find value that translates into saved time, saved money and less frustration in your efforts to implement test automation back on the job.
KEY POINTS

The truth is that developers can produce code faster, and with more complexity, than ever before. Advancements in code generation tools and code reuse are making it difficult for testers to keep up with software development. Test automation, especially if applied only at the end of the testing cycle, will not be able to keep up with these advances. We must pull out all the stops along the development life cycle to build in good quality and to test as early and as often as possible, with the assistance of test automation.
BENEFITS
There are some common 'perceived' benefits that I like to call 'bogus' benefits. Since test automation is an investment, it is rare that the testing effort will take less time or fewer resources in the current release. Sometimes there is a perception that automation is easier than testing manually; in fact it makes the effort more complex, since there is now an additional software development effort on top of the testing. Automated testing does not replace good test planning, the writing of test cases or much of the manual testing effort.
COSTS
COMMON VIEW
There are some inherent problems with this paradigm. First, test automation is applied only at the final stage of testing, when it is most expensive to go back and correct problems. The testers don't get a chance to create scripts until the product is finished and turned over to them, and at that point there is tremendous pressure on resources to just test the software and forgo the test automation effort. Simply using capture/playback may be temporarily effective, but using capture/playback to create an entire suite will make the scripts hard to maintain as the application is modified.
TEST and AUTOMATE EARLY
WORK WITH DEVELOPERS

The same approach should be applied at each subsequent level of testing: apply test automation where it makes sense to do so. Whether homegrown utilities or purchased testing tools are used, it's important that the development team work with the testing team to identify areas where test automation makes sense and to support the long-term use of test scripts.

Where GUI applications are involved, the development team may decide to use custom controls to add functionality and make their applications easier to use. It's important to determine whether the testing tools can recognize and work with these custom controls. If they can't, test automation may not be possible for that part of the application. Similarly, if months and months of effort went into building test scripts and the development team then adopts new custom controls that don't work with the existing scripts, the change can completely invalidate all the effort that went into test automation. In either case, by identifying up front, in the application design phase, how application changes affect test automation, informed decisions can be made that affect application functionality, product quality and time to market. If test automation concerns aren't addressed early and test scripts cannot be run, there is a much higher risk of reduced product quality and increased time to market.

Working with developers also promotes building 'testability' into the application code. By providing hooks into the application, testing can sometimes be targeted at specific areas of the code, and some tests can be performed that would otherwise be impossible. Besides test drivers and capture/playback tools, code coverage tools can help identify holes in the testing of the code. Remember that code coverage may tell you whether paths are being exercised, but complete code coverage does not mean the application has been exhaustively tested; for example, it will not tell you what has been 'left out' of the application.
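To make the idea of a 'testability hook' concrete, here is a minimal sketch in Python; the application module, order data and hook function are all hypothetical. The application exposes a small internal query that a test can call directly, verifying state that the user interface never displays.

```python
# order_app -- hypothetical application module with a built-in test hook.
_orders = {}

def submit_order(order_id, amount):
    """Normal application entry point (in practice driven through the GUI)."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    _orders[order_id] = {"amount": amount, "status": "submitted"}

def test_hook_get_order_state(order_id):
    """Testability hook: exposes internal state the GUI never shows, so a
    test can verify it directly instead of inferring it from the screens."""
    return _orders.get(order_id)

# A test drives the application, then checks internal state via the hook.
submit_order("A-100", 25.0)
state = test_hook_get_order_state("A-100")
assert state is not None and state["status"] == "submitted"
print("order state verified via test hook:", state)
```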
CAPTURE/PLAYBACK
Capture/playback functionality can be useful in some ways. Even when creating small modular scripts, it may be easier to first capture the test, then go back and shorten and modify it for easier maintenance. If you wish to create scripts that provide obvious immediate payback, and you don't care whether they're maintainable, then using capture/playback can be a very quick way to create the automated tests; these scripts are typically thrown away and rebuilt later for long-term use. The capture/playback functionality is also good to use during the design phase of a product if a prototype has been developed. During usability testing, which is an application design technique, users sit at the computer with a mock-up of the actual application: they're able to use the interface, but the real functionality has not yet been built. By running the capture/playback tool in capture mode while the users are 'playing' with the application, the recorded keystrokes and mouse movements track where the users move through the system. Reading these captured scripts helps the designers understand the level of difficulty in navigating through the application.
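As a sketch of why raw recordings age badly, and of how a captured script can be shortened into something maintainable, here is a small Python illustration. The GuiStub class merely stands in for a real capture/playback tool's API, and the screens and coordinates are invented.

```python
# Minimal stub standing in for a GUI test tool's API so the sketch runs;
# a real capture/playback tool would supply calls like these.
class GuiStub:
    def click(self, x, y):           print(f"click at ({x}, {y})")
    def type_text(self, text):       print(f"type {text!r}")
    def set_field(self, name, val):  print(f"set field {name!r} = {val!r}")
    def press_button(self, name):    print(f"press button {name!r}")

gui = GuiStub()

# --- as recorded: literal coordinates, brittle when the layout changes ---
def recorded_login():
    gui.click(312, 418)          # happens to be the "User" field today
    gui.type_text("jsmith")
    gui.click(312, 452)          # happens to be the "Password" field today
    gui.type_text("secret")
    gui.click(390, 500)          # happens to be the "OK" button today

# --- refactored: named controls, one place to fix when the UI moves ---
def login(user, password):
    gui.set_field("User", user)
    gui.set_field("Password", password)
    gui.press_button("OK")

login("jsmith", "secret")
```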
PLAYERS
If the project is just beginning with test automation, then having someone who can champion the test automation effort is important. This 'champion' should have skills in project management, software testing and software development (preferably a coding background), and is responsible for being the project manager of the test automation effort. This person needs to interact well with both the testers and the application developers, and since he or she may also be actively involved in writing scripts, good development skills are desirable. This person should not be involved in designing test cases or in manual testing, other than to review other team members' work: typically there is not enough time for the same person both to design test cases and to design test automation, nor to build test scripts and run manual tests. Where the testing effort is large, the distinction between these two roles applies to teams of automators and testers as well. Too many times, test automators are borrowed to perform manual testing, and the benefits of test automation are never realized in the current or future releases of the application.
This is not to say that the role of testers is reduced. Test planning still needs to be done by a test lead, test cases still need to be designed and manual testing will still be performed. The added role for these testers is that they most likely will begin to run the automated test scripts. As they run these scripts and begin to work more closely with the test automation 'champion' or test automators, they too can begin to create scripts as the automated test suite matures.
Experience has shown that most bugs are not found by running automated tests. Most are found in the process of creating the scripts, or the first time the code is tested. What test automation mostly buys you is the opportunity not to spend valuable man-hours re-testing code that has been tested before but is too risky to leave untested, and instead to spend those man-hours rigorously testing new code for the first time and identifying new bugs. Just as testing in general is not a guarantee but a form of insurance, test automation is a way to carry even more insurance.
SOME NUTS AND BOLTS
Again, start off small when designing scripts. Identify the functional areas within the application being tested. Design at a high level how each of these functional areas would be automated, then create a specific automated test design for one of them: that is, decide what approach will be used to create scripts, using the test cases as the basis for automating that function. If there are opportunities to share common scripting techniques with other testing modules, identifying those common approaches as potential standards will help in creating maintainable scripts.
Use a similar approach to design and create scripts for some of the other functional areas of the application. As more experience is gained with automation, designing and building scripts to test the integration of these functional areas is the next step in building a larger and more useful testing suite. A sketch of this modular structure follows.
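One possible shape for such a design, sketched in Python: each functional area gets its own test routines, and a shared skeleton enforces the common scripting standard (setup, teardown and result logging in one place). The application areas ("orders", "billing") and helper names here are hypothetical.

```python
# Shared helpers: common steps live in a single place, per the scripting
# standard, so every functional-area module reuses them.
def open_application():   print("launch application")
def close_application():  print("close application")
def log_result(name, ok): print(f"{name}: {'PASS' if ok else 'FAIL'}")

def run_test(name, steps):
    """Common skeleton: setup, run the steps, always tear down, log result."""
    open_application()
    try:
        ok = steps()
    except Exception as exc:
        print(f"{name} raised {exc!r}")
        ok = False
    finally:
        close_application()
    log_result(name, ok)

# --- functional-area tests reuse the same skeleton ---
def test_create_order():
    return True   # real steps would drive the orders screens here

def test_print_invoice():
    return True   # real steps would drive the billing screens here

for name, steps in [("create_order", test_create_order),
                    ("print_invoice", test_print_invoice)]:
    run_test(name, steps)
```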
Since the purpose of automated testing is to find bugs, validations should be made as the tests are performed. At each validation point there is a possibility of error. Should the script find an error, logic should be built in so that it not only reports the error but also routes back to an appropriate point within the automated testing suite, so that testing can continue. This is necessary if automated tests are to run overnight successfully. This part of test automation is the 'error recovery process'. It is a significant effort, since it has to be designed in for every validation point; it's best to design and create reusable error-recovery modules that can be called from many validation points in many scripts. Related to this are the reports generated from running the tests. Most tools allow you to customize the reports to fit your reporting needs.
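Here is a minimal sketch, in Python, of what such a reusable error-recovery module might look like: every validation point calls one shared validate() routine, which logs the failure, runs a common recovery routine to return the application to a known state, and lets the suite continue. The recovery steps, report format and names are illustrative only.

```python
# Reusable error-recovery module: one validate() routine shared by every
# validation point, plus one recovery routine shared by every script.
failures = []

def recover_to_main_menu():
    # In a real suite this would dismiss dialogs, log back in, etc.
    print("recovering: returning application to the main menu")

def validate(test_name, description, actual, expected):
    """Check one validation point; on failure, log it and recover."""
    if actual == expected:
        return True
    failures.append((test_name, description, expected, actual))
    print(f"FAIL {test_name}: {description} "
          f"(expected {expected!r}, got {actual!r})")
    recover_to_main_menu()   # shared routine, called from every script
    return False

# A script checks the result and skips its remaining steps after recovery.
def test_order_total():
    total = 90               # stand-in for a value read from the screen
    if not validate("order_total", "order total", total, 100):
        return               # recovery already ran; move on to the next test

test_order_total()
print(f"run complete, {len(failures)} failure(s) logged")
```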
It's also important to write good comments in the test scripts to help those who will maintain them. Write the scripts with the belief that someone else will be maintaining them.
In the automated test design, or in comments within the test scripts themselves, also identify any manual intervention that is necessary to set up the test environment or test data in order to run the scripts. Perhaps databases need to be loaded or data has to be reset.
TEST DATA
Another method of setting up the data is to create test scripts that run and populate the database with the necessary data to be used in the automated tests. This may take a little longer to populate, but there's less dependency on data structures. This method also allows more flexibility should other data in the database change.
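As a sketch of one way such a population script might look, the following Python loads known rows into a database before an automated run. sqlite3 (from the Python standard library) stands in for whatever database the application actually uses, and the table and rows are hypothetical.

```python
# Setup script: rebuild the test data so the suite starts from a known state.
import sqlite3

conn = sqlite3.connect("testdata.db")
conn.execute("DROP TABLE IF EXISTS customers")
conn.execute("CREATE TABLE customers "
             "(id INTEGER PRIMARY KEY, name TEXT, region TEXT)")

# Known rows the automated scripts will look for during validation.
rows = [(1, "Acme Corp", "EAST"),
        (2, "Bolt Ltd",  "WEST")]
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()
print("test data loaded; automated suite can now run against a known state")
```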
Even though I mention 'databases' specifically, the concepts apply to other types of data storage as well.
Other people with test automation experience have successfully used 'randomly' generated data with their test scripts. Personally, I have no experience using randomly generated data, but it is another option worth investigating if you're looking for other ways to work with data.
POTENTIAL RISKS
If contractors are used to help build or champion the test automation effort because of their experience, there is the risk that much of that experience and skill will 'walk away' when the contractor leaves. If a contractor is used, ensure there is a plan to backfill the position, since the loss of a resource will most likely affect both the maintenance of existing test scripts and the development of new ones. It's just as important that there be a comprehensive transfer of knowledge to those who will be creating and maintaining the scripts.
Since the most significant payback from running automated tests comes in future releases, consider how long the application being tested will remain in its current state. If a rewrite of the application is planned in the near future, or if the interface is going to be overhauled, then it probably makes sense to use test automation only where it provides immediate payback. Again, here's where working with application designers and developers can make a difference, especially if internal changes are planned that may not appear to affect the testing team but in reality can affect a large number of test scripts.
SUMMARY
If you have had experiences different from these that you've found successful, or if you've experienced hardships using some of these recommendations, I'd be grateful to hear from you. Many people, including myself, are interested in finding out what really works in creating higher-quality software more quickly.
Comments can be sent to me at:
zallar@testingstuff.com
I also support a testing web page that has links to quality organizations, tool vendors, consultants, and other testing references. It is not the best testing web page out there, but it has pointers to some other excellent web pages, and they are noted. The URL for this page is: http://www.testingstuff.com