Why Does Automation Fail?


Automation is supposed to save the world, or at least the testing world. There are a lot of myths around test automation: testing can be 100% automated; automation is a lot faster than manual testing; automation can be designed so that it is maintenance-free; automation can be created without any scripting. While automation can truly accelerate the software development lifecycle, if it is done for the wrong reasons or implemented the wrong way, it can actually hinder progress.

Here are the top five reasons why automation often fails.

1) Record and playback is used to create scripts

There are many tools on the market claiming “scriptless automation” via record and playback. It may seem genius and amazingly simple at first: you press a record button, walk through the test, and it magically plays back automatically. It even works the next day, right up until the first code change, at which point it breaks and you suddenly need to understand the tool and the underlying technology to make it work again. Most often, these automation suites become completely unusable after several runs and have to be re-recorded, at which point it becomes easier to just test the functionality manually.
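To illustrate the difference, here is a minimal, hypothetical sketch in Python with Selenium (the URL, element IDs, and credentials are placeholders): a recorded script typically pins itself to a generated absolute XPath, while a hand-written page object names its locators and keeps them in one maintainable place.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# What a recorder typically captures: an absolute XPath tied to the current layout.
# The first time a wrapper <div> is added or removed, this locator silently breaks.
RECORDED_LOGIN_BUTTON = "/html/body/div[2]/div/form/div[3]/button[1]"


class LoginPage:
    """Hand-written page object: locators are named, stable, and maintained in one place."""

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()


if __name__ == "__main__":
    driver = webdriver.Chrome()
    driver.get("https://example.com/login")       # placeholder URL
    LoginPage(driver).login("qa_user", "s3cret")  # placeholder credentials
    driver.quit()
```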


2) Scripts and data are not separated

Testing almost always involves entering data into or reading data from the application. While some data, such as a page title, remains static, a lot of it varies from test to test: usernames, search strings, stock quotes, zip codes. If such data gets hard-coded into the script, a different script has to be created for each data variation. The result is a suite of tests that are essentially the same, yet if anything in the application changes, all of them have to be updated, which is a maintenance hazard. Instead, data should be parameterized and kept separate from the scripts. For dynamic data such as today’s date or a stock price, it is best to validate with regular expressions or web service calls.
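As a minimal sketch of that separation, assuming pytest and Selenium, with a hypothetical `browser` fixture and a hypothetical `search_terms.csv` data file: the test data lives outside the script, and dynamic values are validated with a regular expression instead of a hard-coded value.

```python
import csv
import re
from datetime import date

import pytest
from selenium.webdriver.common.by import By


def load_search_terms(path="search_terms.csv"):
    """Read test data from an external CSV so the script itself stays data-free."""
    with open(path, newline="") as f:
        return [row["term"] for row in csv.DictReader(f)]


@pytest.mark.parametrize("term", load_search_terms())
def test_search_returns_results(browser, term):
    # One script, many data variations: the same test runs once per CSV row.
    browser.get("https://example.com/search?q=" + term)
    assert "results" in browser.title.lower()


def test_header_shows_todays_date(browser):
    browser.get("https://example.com/dashboard")
    shown = browser.find_element(By.ID, "report-date").text
    # Dynamic data: check the format with a regex and the value against the
    # date computed at run time, rather than hard-coding yesterday's output.
    assert re.fullmatch(r"\d{4}-\d{2}-\d{2}", shown)
    assert shown == date.today().isoformat()
```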


3) Unstable test environment

The test environment is one of the most common sources of automation errors: server unavailability, machines running out of memory or disk space, database locks, expired passwords, and so on. Any of these can cause not just one script but the whole automation suite to fail, which in turn means wasted automation cycles and unreliable test results. While environment issues may seem to be out of the tester’s control, there are several things that can be done to detect them before an automated execution, and even to prevent them. An automated environment smoke test can check web and database server availability, credential validity, and available disk space, among other things. It can run continuously on a CI server and alert the concerned parties when anything goes wrong, and it can be further enhanced to perform triage tasks such as server reboots, password resets, and disk cleanup. Another way to reduce, or even eliminate, environment issues is to store the environment settings in a script file and spin up a fresh environment before every automated execution.
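A minimal sketch of such a smoke test, assuming Python with the `requests` and `psycopg2` libraries and a PostgreSQL-backed environment; the URL, connection string, and threshold are placeholders. Scheduled on a CI server, each failing check flags the environment before any functional test runs.

```python
import shutil

import psycopg2  # assumes a PostgreSQL-backed test environment
import requests

WEB_URL = "https://test-env.example.com/health"             # placeholder endpoint
DB_DSN = "dbname=app user=qa password=secret host=test-db"  # placeholder credentials
MIN_FREE_GB = 5


def check_web_server():
    resp = requests.get(WEB_URL, timeout=10)
    assert resp.status_code == 200, f"Web server unhealthy: HTTP {resp.status_code}"


def check_database():
    # Connecting also proves that the stored credentials have not expired.
    psycopg2.connect(DB_DSN, connect_timeout=10).close()


def check_disk_space(path="/"):
    free_gb = shutil.disk_usage(path).free / 1024 ** 3
    assert free_gb >= MIN_FREE_GB, f"Only {free_gb:.1f} GB free on {path}"


if __name__ == "__main__":
    for check in (check_web_server, check_database, check_disk_space):
        check()
    print("Environment smoke test passed")
```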


4) Automation scripts are different from manual test cases

In most companies, automation and manual testing are two separate teams without much synergy. While the first version of the automated scripts is developed from the manual test cases, the two soon begin to diverge: updates made to the manual test cases are not reflected in the automated scripts, and vice versa. As a result, the automated scripts become outdated and no longer reflect the true coverage. This can be resolved by using tools or practices that keep the manual and automated scripts as a single entity.
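One lightweight way to do that, sketched below in Python (the test ID, URL, credentials, and `browser` fixture are all hypothetical), is to keep the manual steps as the docstring of the automated test and generate the manual test plan from the same file, so there is only one artifact to update.

```python
import inspect
import sys

from selenium.webdriver.common.by import By


def test_login_with_valid_credentials(browser):
    """TC-101: Login with valid credentials.
    1. Open the login page.
    2. Enter a valid username and password.
    3. Click Submit.
    Expected: the dashboard heading is displayed.
    """
    browser.get("https://example.com/login")
    browser.find_element(By.ID, "username").send_keys("qa_user")
    browser.find_element(By.ID, "password").send_keys("s3cret")
    browser.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    assert browser.find_element(By.TAG_NAME, "h1").text == "Dashboard"


def export_manual_test_plan(module=None):
    """Print the manual test plan from the same source, so the two never diverge."""
    module = module or sys.modules[__name__]
    for name, fn in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("test_") and fn.__doc__:
            print(fn.__doc__.strip(), end="\n\n")
```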



5) Ambiguous test results

The main reason we test is to figure out whether the application under test is working as expected. Test results are the most essential output of a manual or automated test suite. However, most automation engineers spend far more time developing the scripts than designing good test results reporting. As a result, automated test results are often unreadable and report automation script failures instead of the underlying application issues. For example, the report might show that “Object ‘button’ exists on Page ‘Login’ is FALSE” instead of saying that “the Submit button is missing on the Login page.” Automated test result reports should look as good as, if not better than, manual results reports. In addition, the analysis of these results can itself be automated to produce a summary report, group failures, identify environment-related issues, and so on.
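As a small, hypothetical sketch of both ideas in Python (the locator, page, and element names are placeholders): a helper raises failures phrased in terms of the application, and a summary routine groups identical failure messages so the report reads like a list of application issues rather than script errors.

```python
from collections import Counter

from selenium.webdriver.common.by import By


def assert_element_present(driver, page_name, element_name, locator):
    """Fail with an application-centric message, not an automation-internal one."""
    if not driver.find_elements(*locator):
        raise AssertionError(f"{element_name} is missing on the {page_name} page")


# Usage inside a test, instead of a bare "exists is FALSE" check:
#   assert_element_present(driver, "Login", "Submit button",
#                          (By.CSS_SELECTOR, "button[type='submit']"))


def summarize(results):
    """results: iterable of (test_name, failure_message or None) pairs."""
    results = list(results)
    failures = [msg for _, msg in results if msg]
    print(f"{len(results) - len(failures)} passed, {len(failures)} failed")
    # Group identical failures so one application issue shows up as one line.
    for message, count in Counter(failures).most_common():
        print(f"  {count} x {message}")
```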


The main reason automation fails is unrealistic expectations. Automation can truly reinvent the testing process and shorten time to market, but it needs to be done right and for the right reasons. Use the points above as guidelines to make your automation a success.