{"id":33976,"date":"2022-02-18T07:08:26","date_gmt":"2022-02-18T12:08:26","guid":{"rendered":"https:\/\/centricconsulting.com\/?p=33976"},"modified":"2022-03-24T13:58:18","modified_gmt":"2022-03-24T17:58:18","slug":"moving-from-manual-to-automated-software-tests","status":"publish","type":"post","link":"https:\/\/centricconsulting.com\/blog\/moving-from-manual-to-automated-software-tests\/","title":{"rendered":"Moving from Manual to Automated Software Tests"},"content":{"rendered":"

Many organizations want the benefits of automated software testing. Most see the “next step” in testing as building automated versions of their manual tests. But is that what you need? In this blog, we explore which choice is right for you.<\/h2>\n
\n

Over the last several years, I have worked with teams facing very similar problems. Each of their organizations was trying to \u201cgo Agile,\u201d and for their leadership, this meant \u201cautomate all the things.\u201d But automated software tests<\/a> aren\u2019t always the answer.<\/p>\n

The teams all faced similar problems. Many were the result of adding the manual \u201cfunctional\u201d tests for newly developed features to the \u201cregression suite,\u201d where they joined the teams\u2019 other manual tests. Some were simple \u201chappy path\u201d scripts intended to make sure basic functionality worked.<\/p>\n

Some tests were legacy scripts with an unknown purpose, and the people working with the software did not understand their function. The teams had too many manual tests to run and not enough time to run them, which left no time to test new development properly.<\/strong><\/p>\n

One organization barely had time to get their team to run the tests on a single OS platform, let alone the plethora of systems they needed to support. Their solution was to run the full suite on one OS and platform combination each cycle, rotating to a different combination the next time.<\/p>\n

Automate Everything?<\/h2>\n

The solution to the teams\u2019 problems appeared to be \u201cautomate all the tests, so they can run more efficiently.\u201d This may be a reasonable approach sometimes. But before embarking on it, I suggest a clarifying test: if you know the purpose of each manual test, automating that test might be a good idea. The important question for each test is, \u201cWhat do you expect this test to tell you?\u201d<\/p>\n

Many organizations have huge test suites. They include functional, regression and performance tests. They sometimes include tests with no known purpose. These same organizations talk about how many test cases or suites they run on a regular basis, but automating those suites can be very problematic. I’ve run into many clients who talk about how many “tests” they have run, but no one actually looks to see what those tests are doing.<\/strong><\/p>\n

I remember one client that boasted “Over 300 tests!” But when I walked through the tests with them, each one ended with “Assert = True,” so they always passed. The tests did not actually verify anything.<\/p>\n
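As a minimal, hypothetical sketch of what those \u201calways green\u201d tests looked like, compare a vacuous assertion with one that actually checks a result. The apply_discount function and its behavior are invented for illustration:<\/p>\n

```python
# A hypothetical sketch: a test that can never fail versus one that
# actually checks the behavior it claims to cover.
# apply_discount and its rules are invented for illustration.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

def test_discount_vacuous():
    apply_discount(100.0, 10)  # the result is computed...
    assert True                # ...but never checked, so this always passes

def test_discount_meaningful():
    # Fails if the calculation ever regresses -- which is the whole point
    assert apply_discount(100.0, 10) == 90.0

test_discount_vacuous()
test_discount_meaningful()
```

The first test passes no matter what apply_discount returns; only the second can ever catch a regression.<\/p>\n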

Choosing How to Test the Software<\/h2>\n

The question we need to ask about every test<\/a>, manual or automated, is \u201cWhy are we running this test?\u201d We can have a general idea of what we want to check regularly while focusing on whether the software\u2019s core functions work under specific conditions. Identifying those conditions takes time and effort.<\/p>\n

Before creating any tests, we must first determine what we need to test. When looking at a test, one question to consider is, \u201cWhat can this test tell us that other tests won\u2019t?\u201d<\/strong><\/p>\n

Without that question, testers often add duplicated scenarios to test suites. These duplicates require ongoing review and refinement to keep the full test suite repository as relevant and atomic as possible. Having redundant tests for every variance in implementation environment, platform and OS may seem thorough, but is that a good use of time and computing resources? There are likely better options.<\/p>\n

Sometimes, testers or Software Development Engineers in Test (SDETs) intend to exercise redundant tests in multiple environments and have at least some understanding of their purpose. Often, other tests are run simply because they are on the list of scripts to be run.<\/strong><\/p>\n
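One hedged way to consolidate such redundant, per-environment copies is a single data-driven test that iterates over the variants. The environment names, URLs and build_health_url helper below are all hypothetical:<\/p>\n

```python
# A sketch, under assumed names: rather than near-identical test copies
# per environment, one data-driven test covers each variant and names
# the failing environment in its message.

ENVIRONMENTS = {
    "staging": "https://staging.example.com",
    "production": "https://www.example.com",
}

def build_health_url(base_url: str) -> str:
    """Hypothetical helper deriving the health-check endpoint."""
    return base_url.rstrip("/") + "/health"

def test_health_url_for_all_environments():
    # One test, many variants -- a failure message identifies the variant
    for env, base in ENVIRONMENTS.items():
        url = build_health_url(base)
        assert url.endswith("/health"), f"bad health URL for {env}: {url}"

test_health_url_for_all_environments()
```

Frameworks such as pytest offer parametrization for the same purpose, but the point stands in plain Python: one maintained test instead of several duplicates.<\/p>\n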

Let\u2019s look at a couple of different ways you can introduce automated testing into your practices.<\/p>\n

Automating Existing Tests<\/h3>\n

One concern with the \u201cautomate everything\u201d mindset is the presumption that the existing manual tests were carefully crafted for specific purposes and that developers regularly exercise and maintain the existing test scripts. Very few scriptwriters will ask why you need these tests or what the differences between tests are. In some cases, they may not understand what they are automating.<\/p>\n

It\u2019s likely (at least in the instances where I\u2019ve seen the \u201cautomate everything\u201d model implemented) no one will ask any of these questions for a long time after they implement automated testing.<\/strong><\/p>\n

When looking at functional testing, many organizations use a check-the-box approach. Leadership may not say it that way, but by pressuring teams to test faster, that is the message they send. In response, testers write a quick test covering a simple happy-path scenario and rarely write or execute tests beyond the stated requirement. These tests are easy to automate and often ignore potential risks. When I have asked testers and managers about such tests, the responses focus on main functionality, not edge cases.<\/p>\n

Automating Exploratory Tests<\/h3>\n

We rarely find odd behavior on the simple, happy path. Exercising the application with an attitude of \u201cwhat happens if this occurs?\u201d uncovers unusual behavior. Experienced testers working with BAs or other business SMEs often discover scenarios worth including in testing, even if they didn\u2019t consider those scenarios in the original plan.<\/p>\n

When these tests uncover problems, they can be added to the automated test suites. These cases require careful consideration around creating the scenario, setting up the environment and defining the sequence of events that exercises the case of greatest value.<\/strong><\/p>\n

Using exploratory, experience-based testing to exercise these \u201cwhat-if\u201d scenarios often yields great benefits by revealing paths not covered by existing test scripts. By keeping a careful record of what you did, you have the basis for writing new automated scripts if the test results warrant them.<\/p>\n
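As a sketch of turning an exploratory finding into a permanent automated check, suppose a session revealed that a quantity parser accepted negative input. The recorded steps become a regression test; parse_quantity and the scenario are invented for illustration:<\/p>\n

```python
# A hypothetical sketch: an exploratory "what if the quantity is
# negative?" finding, captured as a repeatable regression test after
# the fix. All names here are invented.

def parse_quantity(raw: str) -> int:
    """Hypothetical function, fixed after the exploratory finding."""
    value = int(raw.strip())
    if value < 0:
        raise ValueError(f"quantity cannot be negative: {value}")
    return value

def test_negative_quantity_is_rejected():
    # Replays the exact scenario recorded in the exploratory notes
    try:
        parse_quantity(" -3 ")
    except ValueError:
        pass  # expected: the original bug let negatives through
    else:
        raise AssertionError("negative quantity was accepted")

test_negative_quantity_is_rejected()
```

The careful record from the session supplies the input, the environment and the expected outcome, so the one-off discovery keeps paying off on every run.<\/p>\n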

Suggestions for Strengthening Testing<\/h2>\n

Making any form of testing meaningful and valuable to the organization requires thoughtful consideration. Think about all tests \u2013 manual and automated \u2013 and what information they provide or how you can combine similar tests. Consider the intent behind the tests, and look to see if they are delivering on that intent.<\/p>\n

Review tests regularly to make sure they remain relevant, to see whether newer tests provide similar information and to learn whether the newer ones might be better than the older ones.<\/strong> Compare any newly created test against existing tests for overlap or redundancy.<\/p>\n

Conclusion<\/h2>\n

Is automated software testing important? Yes, absolutely. It is invaluable to delivering software at a predictable cadence. Using the right tool for the purpose at hand is vital if you want any level of confidence in the results.<\/strong><\/p>\n

Good automated testing frees your technical knowledge workers to consider other scenarios or paths you might need to explore, so you can make sure your software always delivers for your company.<\/p>\n","protected":false},"excerpt":{"rendered":"

Many organizations want the benefits from automated software testing. Most see the “next step” in testing as building automated versions of manual tests.<\/p>\n","protected":false},"author":361,"featured_media":33981,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_oasis_is_in_workflow":0,"_oasis_original":0,"_oasis_task_priority":"","_relevanssi_hide_post":"","_relevanssi_hide_content":"","_relevanssi_pin_for_all":"","_relevanssi_pin_keywords":"","_relevanssi_unpin_keywords":"","_relevanssi_related_keywords":"","_relevanssi_related_include_ids":"","_relevanssi_related_exclude_ids":"","_relevanssi_related_no_append":"","_relevanssi_related_not_related":"","_relevanssi_related_posts":"","_relevanssi_noindex_reason":"","footnotes":""},"categories":[1],"tags":[18560],"coauthors":[23651],"acf":[],"publishpress_future_action":{"enabled":false,"date":"2024-07-21 19:55:39","action":"change-status","newStatus":"draft","terms":[],"taxonomy":"category"},"_links":{"self":[{"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/posts\/33976"}],"collection":[{"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/users\/361"}],"replies":[{"embeddable":true,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/comments?post=33976"}],"version-history":[{"count":0,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/posts\/33976\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/media\/33981"}],"wp:attachment":[{"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/media?parent=33976"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/categories?post=33976"},{"taxonomy":"post_tag","embeddable":
true,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/tags?post=33976"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/centricconsulting.com\/wp-json\/wp\/v2\/coauthors?post=33976"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}