Don’t Fall Into These Traps When Planning Your Automation Strategy

Author: Tejaswini Simha shares her experience with test automation and some of the expectation challenges that need to be managed.

Knowledge of UI test automation tools is all that the test automation team needs

Not to reopen the "should testers have programming skills" debate, but programming is an essential skill if you want to see the big automation picture. Knowledge of only the tools in use (Selenium, SOAP, etc.) does not by itself make a good automation engineer. Good programming skills interspersed in the team improve the thinking on how a test can be automated or a framework extended.
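As a sketch of the kind of framework extension that programming skill enables (the helper name and example are invented for illustration, not tied to any particular tool), a reusable retry wrapper removes copy-pasted try/except blocks from individual test scripts:

```python
import time

def retry(action, attempts=3, delay=0.0, exceptions=(Exception,)):
    """Run `action` up to `attempts` times, re-raising the last failure.

    A small framework extension like this keeps error handling out of
    individual test scripts.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except exceptions as err:
            last_error = err
            time.sleep(delay)
    raise last_error

# Example: an action that fails twice before succeeding.
calls = {"n": 0}
def sometimes_fails():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(sometimes_fails))  # -> ok
```

The same wrapper can be applied to any framework action, which is exactly the sort of leverage that tool knowledge alone does not provide.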


Automation speeds up the development process and teams can shrink their sprint cycles

Yes! Automation does speed up the overall process. But teams need to understand that they have to invest time and money to get there first.


100% automation is easy

The question is not whether 100% automation is achievable, but whether it is really worth the effort and cost. Not all tests need to be automated. Automation should focus on where maximum benefit can be achieved (i.e. the biggest shrinking of timelines, and situations where manual checks are more prone to errors). Test data values to run against should also be chosen judiciously.
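One way to make "focus where the benefit is" concrete is a rough scoring sketch. The formula, weights and example numbers below are invented for illustration; real teams would plug in their own estimates:

```python
def automation_benefit(runs_per_release, manual_minutes,
                       flakiness_risk, build_cost_minutes):
    """Rough payoff estimate: minutes saved per release, discounted by
    expected flakiness, minus the cost of building the automation."""
    saved = runs_per_release * manual_minutes * (1 - flakiness_risk)
    return saved - build_cost_minutes

# Invented candidates: a frequently repeated check scores well, a
# rarely repeated one may not be worth automating at all.
candidates = {
    "login smoke test":       automation_benefit(50, 3, 0.1, 60),
    "one-off data migration": automation_benefit(1, 30, 0.2, 240),
}
for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.0f}")
```

Even a crude score like this makes it obvious that a one-off check rarely justifies the automation effort, while a frequently repeated one does.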


Once automated, the same code should work on all versions

"Highly unlikely". The tests depend on the functionality and, to a certain extent, on the underlying technology as well (especially for test synchronization). Expect that the automated scripts, and the test framework along with them, will need to be refactored as the application changes (be it functionally or technically).

Also, even if there are no visual changes, no functional changes and no great change in architecture, the tests can still break because of changes in the object locators. Object locators changing across builds is a common phenomenon. Good development practices for object identification, and avoiding unnecessary DOM changes, should be religiously followed.


ROI from automation should be high

Well! One also needs to consider the recurring expenditure of maintaining the test scripts (even though it may be minimal).
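A quick break-even sketch shows why that recurring cost matters. All the numbers and the helper name here are invented for illustration:

```python
def break_even_runs(manual_minutes, build_minutes, maintenance_minutes_per_run):
    """Number of runs after which automation beats manual execution.
    Each automated run still carries a small maintenance cost."""
    saved_per_run = manual_minutes - maintenance_minutes_per_run
    if saved_per_run <= 0:
        return None  # automation never pays off
    # Ceiling division: the build cost must be fully recovered.
    return -(-build_minutes // saved_per_run)

# Illustrative: a 20-minute manual check, 300 minutes to automate,
# 2 minutes of upkeep attributed to each automated run.
print(break_even_runs(20, 300, 2))  # -> 17
```

If maintenance ever costs as much as the manual check it replaces, the ROI goes negative, which is exactly the trap the heading warns about.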


Test automation is all about capturing a few scenarios and rerunning the same

UI automation tests are flaky as a rule, and expect them to be more so when they are recorded. As is common knowledge, generated code is rarely optimal and is difficult to maintain. In the same way, the test scripts produced when you record a test are generated code: they will not be optimal, they will be very painful to maintain, and they might not even work in every possible scenario.
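A common reason recorded scripts stay flaky is that they hard-code fixed sleeps, while hand-written code polls for a condition. The generic helper below is a sketch with invented names; in real Selenium suites, `WebDriverWait` with expected conditions plays this role:

```python
import time

def wait_until(condition, timeout=5.0, poll=0.1):
    """Poll `condition` until it returns a truthy value or `timeout`
    elapses. Unlike a recorded fixed sleep, this adapts to how fast
    the application actually responds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Simulated element that 'appears' shortly after the test starts.
appeared_at = time.monotonic() + 0.3
element_visible = lambda: time.monotonic() >= appeared_at
print(wait_until(element_visible))  # True once the 'element' shows up
```

Replacing every recorded `sleep(5)` with a condition-based wait is usually the single biggest flakiness fix in a converted recording.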


One-stop automation tool that fits all scenarios

Technically that’s not feasible. UI automation is a completely different ball game compared to API automation. Also, not all tools are born equal or have equality thrust upon them!

UI automation tools that take care of all the DOM changes by themselves

Tests break with each new change if the underlying DOM changes (and change it does, sometimes due to developer indiscipline). The tools and frameworks that we have experimented with all talk about being able to take care of DOM changes automatically in their next version 😉 UI test runs still need to be watched carefully.
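Until "self-healing" locators live up to the marketing, a simple hedge is a fallback chain that tries alternate locators and reports when the primary one stops matching, so locator drift shows up in the logs instead of passing silently. This is a sketch with invented names, not any tool's actual API:

```python
def find_with_fallback(find, locators):
    """Try each locator in order using `find` (standing in for a call
    like driver.find_element); warn when a fallback was needed so that
    locator drift is visible in the test run output."""
    for i, locator in enumerate(locators):
        element = find(locator)
        if element is not None:
            if i > 0:
                print(f"warning: fell back to locator #{i}: {locator!r}")
            return element
    raise LookupError(f"no locator matched: {locators}")

# Stubbed 'DOM': only the fallback CSS selector still matches.
dom = {"button.save": "<button>"}
element = find_with_fallback(dom.get, ['[data-test-id="save"]', "button.save"])
print(element)  # -> <button>
```

The warning is the point: a run that quietly heals itself hides exactly the drift that should trigger a locator cleanup.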


Native application tests should be seamlessly handled along with web UI tests     

Given Selenium’s popularity, it is the underlying tool for most popular frameworks. Selenium doesn’t inherently support any native application testing, so test frameworks will need to deal with executing tests across different sets of tools. Integrating all of them and stabilizing the tests can be a challenge.
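One common way to keep that tool split manageable is a thin adapter layer, so the batch runner does not care which tool executes a test. The sketch below uses invented class names and stubbed `run` bodies; in practice the web adapter would wrap Selenium and the native adapter would wrap a native-app driver:

```python
class WebDriverAdapter:
    """Wraps a web UI tool (e.g. Selenium) behind a common interface."""
    def run(self, test_name):
        # Real code would launch a browser session and drive the test.
        return f"web:{test_name}:passed"

class NativeDriverAdapter:
    """Wraps a native-app testing tool behind the same interface."""
    def run(self, test_name):
        # Real code would start the app on a device or emulator.
        return f"native:{test_name}:passed"

def run_suite(tests):
    """Dispatch each (platform, name) pair to the matching adapter."""
    adapters = {"web": WebDriverAdapter(), "native": NativeDriverAdapter()}
    return [adapters[platform].run(name) for platform, name in tests]

print(run_suite([("web", "login"), ("native", "push_notification")]))
```

The hard part the article warns about, stabilizing each tool's sessions behind those stubbed `run` methods, is where the real integration effort lives.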


Tests that are run in a batch should automatically get synchronized and adding individual tests to a batch should be smooth       

In practice, batch synchronization is a big issue. If the flow of tests is designed without taking care of each test’s pre-requisites and its context sensitivity, the batch will be brittle, so care should be taken while designing the flow of test cases. Even a silly missed mouse hover in an automated script will break the test, whereas the same test would have passed manually because you would have moved the mouse to the required location without even knowing it.
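Making pre-requisites explicit is one way to design that flow deliberately. The sketch below (invented names and example) has each test declare its pre-requisites, and orders the batch so dependencies run first, failing loudly on a cycle instead of producing a brittle ordering:

```python
def order_batch(tests):
    """Order tests so each runs after its declared pre-requisites.
    `tests` maps a test name to a list of prerequisite test names."""
    ordered, done = [], set()

    def visit(name, trail=()):
        if name in done:
            return
        if name in trail:
            raise ValueError(f"circular pre-requisites via {name!r}")
        if name not in tests:
            raise KeyError(f"unknown pre-requisite {name!r}")
        for dep in tests[name]:
            visit(dep, trail + (name,))
        done.add(name)
        ordered.append(name)

    for name in tests:
        visit(name)
    return ordered

# Invented example: checkout assumes a logged-in user with a filled cart.
suite = {"login": [], "add_to_cart": ["login"], "checkout": ["add_to_cart"]}
print(order_batch(suite))  # -> ['login', 'add_to_cart', 'checkout']
```

Declaring pre-requisites up front also makes it obvious when a single test is pulled out of the batch and run alone, because its missing context fails fast instead of surfacing as a mysterious mid-run breakage.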