Thursday, June 11, 2009

Why We Automate Tests

James Whittaker, who is now blogging at the Google Testing Blog, talked a bit about manual versus automated testing. First he implied that testers are unjustifiably infatuated with their test automation frameworks:
Too often we are solution focused and marry ourselves to our automation rather than the product we are testing. Whenever I see testers that seem more vested in their automation than in the product they are shipping I worry the pendulum has swung too far.
He also tried to relate this question back to good test design.
Let's shift the debate to good test design. Whether those tests ultimately end up being executed manually or via a tool is a moot point, let's just make them good ones that solve real testing problems.
I'm all for good test design and solving real testing problems, but how tests are run is not a moot point, and I'd like to give all of those "solution-focused" testers a little more credit than that. Let me start with some simple logic:
  1. Any series of steps that you perform more than once on a computer should be automated if at all possible.
  2. A test case is (at least in some sense) a series of steps that you perform on a computer.
  3. You want to run your test cases more than once (i.e. to perform regression testing).
  4. Therefore, you should automate your test cases.
Occasionally, I'll meet an engineer who seems satisfied to mechanically do the same things over and over, day after day, but for the most part, testers understand this line of reasoning better than most people.

If automating a test case were as simple as running that test case in the first place, there would be nothing more to say, and this post would be finished. Of course, that's not reality, and therein lies the rub. But I would argue that you're still better off automating far more often than not. Automation yields significant benefits in both cost and quality.

Imagine that you design a test case and execute it the first time for a cost of x. In my experience, if a test case is ever going to detect a bug, there is a greater than 90% chance that it will do so the first time you run it. But if it really is a good test case of the kind James is arguing for, you want to run it again and again, ideally with each new build of the software, to look for regressions and related bugs.

Let's say that the cost to automate a test case is 10x what it takes to run it manually. If you only expect to run that test case five times over the life of the software, you should stick with manual testing. But if you expect to run it more than ten times over the life of the software, you pass the break-even point, and it is in your long-term financial interest to automate it.
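
To make the arithmetic concrete, here is a minimal sketch of that break-even calculation (the 10x multiplier, the run counts, and the assumption that automated runs are essentially free are all illustrative, not measured figures):

    # Break-even sketch for a single test case; all figures are
    # illustrative assumptions from the example above, not measured data.

    def manual_cost(x, runs):
        # Every manual run costs roughly the same as the first one.
        return x * runs

    def automated_cost(x, runs, automation_multiplier=10, marginal_run_cost=0.0):
        # One-time automation cost, plus a (near-zero) cost per automated run.
        return x * automation_multiplier + marginal_run_cost * runs

    x = 1.0  # cost of one manual execution, in arbitrary effort units
    for runs in (5, 10, 20):
        print(runs, manual_cost(x, runs), automated_cost(x, runs))
    # At 5 runs manual wins (5 vs. 10); at 10 they tie; past 10, automation wins.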

Now let's say that you do expect to run the test case ten times, and you decide to forgo automation. If you want to achieve the same quality assurance with your testing, you are committing to nine additional manual executions of that test. This is where quality risks start to creep into the process. How likely is it that you will really fit all nine of those executions in, given your busy schedule? How timely will they be? Will you perform them consistently and correctly, or will you make some mistakes and miss some bugs? Automation solves those problems.

Finally, what if you can build a tool that drives the cost differential between manual and automated testing from 10x down to 5x? And what if you can leverage that tool over the long term not for one test case but for n of them? Granted, I am ignoring a couple of important factors, like the cost of automated-test-case maintenance and the magnitude of x relative to the release schedule. But I submit that the gist of the reasoning still holds. In many cases, we can build tools that work this well or better, maybe even for less than the 5x cost in this example. That is why testers work so hard at building test automation tools.
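
As a rough sketch of how that scales across many test cases (the 5x multiplier, the per-run maintenance charge, and the value of n are all assumptions for illustration, not figures from the post):

    # Rough sketch: total effort across n test cases when a shared tool
    # lowers the automation multiplier and maintenance is charged per run.
    # All numbers are illustrative assumptions.

    def total_manual(x, runs, n):
        # Every test case is executed by hand 'runs' times.
        return x * runs * n

    def total_automated(x, runs, n, multiplier=5, maintenance_per_run=0.1):
        # One-time automation cost per case, plus ongoing maintenance per run.
        return n * (x * multiplier + x * maintenance_per_run * runs)

    x, runs, n = 1.0, 10, 100
    print(total_manual(x, runs, n))     # 1000.0 effort units, fully manual
    print(total_automated(x, runs, n))  # 600.0 effort units with the cheaper tool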
