Wednesday, June 24, 2009

Download Eclipse 3.5 Galileo With Amazon S3 and CloudFront

One piece of exciting news today is that Eclipse 3.5 (Galileo) is available. But the best part is that you can download it via Amazon S3 and CloudFront. As of earlier today at least, the "old-fashioned" mirrors were dragging, but the S3 download was extremely snappy.

Unfortunately, the S3 link isn't very well advertised on the download page, so I wanted to mention it here. When you get to the download page for the distro you want, look down the page for "Download from Amazon Web Services". I bet you'll be impressed.

Saturday, June 20, 2009

Mock Responses with Optional Elements in soapUI

SoapUI will probably always be one of my favorite development and testing tools. Recently I had the opportunity to explore its web service simulation and mocking capabilities.

I needed to create some fairly detailed mock responses for a service where pretty much every element in the response schema was optional. By default, soapUI does not add optional elements to the mock response XML, which was causing me a significant amount of pain. Then I noticed, under Preferences, WSDL Settings, an "Include Optional" option described as "always include optional schema elements when creating requests". It turns out that this option applies not only to creating requests but also to creating mock responses. Problem solved. I couldn't find this behavior documented anywhere, though, so I wanted to point it out.
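For illustration, here's a schematic example (the element names are made up; this is not the actual service I was mocking). Given a response schema fragment like this, the default mock response omits the optional element, while checking "Include Optional" makes it appear:

    <!-- Hypothetical response schema; statusDetail is optional. -->
    <xs:element name="OrderStatusResponse">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="orderId" type="xs:string"/>
          <xs:element name="statusDetail" type="xs:string" minOccurs="0"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>

With the option unchecked, the generated mock response contains only orderId; with it checked, soapUI also emits statusDetail, with a "?" placeholder for you to fill in.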

Wednesday, June 17, 2009

An Example of Why You Shouldn't Test in Production

If you ever need a concrete example of why it is worthwhile to set up a proper testing environment for your online or data-driven applications, consider this case. According to the Chicago Tribune, Illinois resident Tom Feddor, proud owner of license plate number "0", has received some 170 bogus parking tickets because

A glitch occurred at the Chicago Department of Revenue involving Feddor's 0 plates being used during tests of ticketing equipment.

The article continues:

It turned out that some city parking-enforcement aides punched in 0 when testing their electronic ticket-issuing devices, Revenue Department spokesman Ed Walsh said. Officials weren't aware there was a 0 plate or that Feddor was receiving tickets, Walsh said in response to the Tribune inquiry.

But thank goodness:

"But we are taking steps to rectify the situation so in the future an actual registered plate number will not be used to do the testing," Walsh said.

Before you laugh too hard, be honest: How many of you have run "test" transactions through your live/production systems exactly in this manner? And what precautions did you have in place?
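For what it's worth, one cheap precaution is to reserve identifiers that can never collide with real data and to short-circuit them before they reach real processing. Here's a minimal sketch of the idea in Java (the names and plate values are hypothetical; a real system would also log suppressed transactions somewhere auditable):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class TicketProcessor {

        // Plate values reserved for testing; no real vehicle can register these.
        private static final Set<String> TEST_PLATES =
                new HashSet<String>(Arrays.asList("TEST-0000", "TEST-0001"));

        public void issueTicket(String plate, String violation) {
            if (TEST_PLATES.contains(plate)) {
                // Test transactions are recorded, never issued to a citizen.
                System.out.println("Suppressed test ticket: " + plate + " / " + violation);
                return;
            }
            // ... real ticket-issuing logic goes here ...
        }
    }

Had the ticketing devices reserved a value like that instead of a plausible plate number, Feddor's "0" plate would never have been touched.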

Thursday, June 11, 2009

Why We Automate Tests

James Whittaker, who is now blogging at the Google Testing Blog, talked a bit about manual versus automated testing. First he implied that testers are unjustifiably infatuated with their test automation frameworks:
Too often we are solution focused and marry ourselves to our automation rather than the product we are testing. Whenever I see testers that seem more vested in their automation than in the product they are shipping I worry the pendulum has swung too far.
He also tried to relate this question back to good test design.
Let's shift the debate to good test design. Whether those tests ultimately end up being executed manually or via a tool is a moot point, let's just make them good ones that solve real testing problems.
I'm all for good test design and solving real testing problems, but how tests are run is not a moot point, and I'd like to give all of those "solution-focused" testers a little more credit than that. Let me start with some simple logic:
  1. Any series of steps that you perform more than once on a computer should be automated if at all possible.
  2. A test case is (at least in some sense) a series of steps that you perform on a computer.
  3. You want to run your test cases more than once (i.e. to perform regression testing).
  4. Therefore, you should automate your test cases.
Occasionally, I'll meet an engineer who seems content to robotically do the same things over and over, day after day, but for the most part, testers understand my line of reasoning better than most people do.

If automating a test case were as simple as running that test case in the first place, there would be nothing more to say, and this post would be finished. Of course, that's not reality, and therein lies the rub. But I would argue that you're still better off automating far more often than not: automation yields significant benefits in both cost and quality.

Imagine that you design a test case and execute it the first time for a cost of x. In my experience, if a test case is ever going to detect a bug, there is a >90% chance that it will do so the first time you run it. But if it really is a good test case like James is arguing for, you want to run it again and again, ideally with each new build of the software, to look for regressions and related bugs.

Let's say that the cost to automate a test case is 10x what it takes to run it manually, and that once it's automated, re-running it costs essentially nothing. If you only expect to run that test case five times over the life of the software, you should stick with manual testing. But if you expect to run it more than ten times over the life of the software, it is in your long-term financial interest to automate it.
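To make that arithmetic concrete, here is the back-of-the-envelope model I have in mind, as a runnable Java sketch (the costs are the illustrative numbers from above, not measurements from a real project):

    public class TestCostModel {

        // Cost of n manual executions at x per run.
        static double manualCost(double x, int n) {
            return n * x;
        }

        // Cost of automating once at ratio * x; automated re-runs assumed free.
        static double automatedCost(double x, double ratio) {
            return ratio * x;
        }

        public static void main(String[] args) {
            double x = 1.0;      // cost of one manual execution
            double ratio = 10.0; // automation costs 10x up front
            for (int n : new int[] {5, 10, 20}) {
                System.out.printf("%2d runs: manual=%4.0fx automated=%4.0fx%n",
                        n, manualCost(x, n), automatedCost(x, ratio));
            }
            // At 5 runs manual wins (5x vs 10x), at 10 it's a wash,
            // and past 10 runs automation is cheaper every time.
        }
    }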

Now let's say that you do expect to run the test case ten times, and you decide to forgo automation. If you want to achieve the same level of quality assurance with your testing, you are committing to nine additional manual executions of that test. This is where quality risks start to creep into the process. How likely is it that you will really get all nine of those executions in, given your busy schedule? How timely will they be? Will you perform them consistently and correctly, or will you make some mistakes and miss some bugs? Automation solves those problems.

Finally, what if you can build a tool that drives the cost differential between manual and automated testing from 10x down to 5x? And what if you can leverage that tool over the long term across not one but n test cases? Granted, I am ignoring a couple of important factors, like the cost of maintaining automated test cases and the magnitude of x relative to the release schedule. But I submit that the gist of the reasoning still holds. In many cases, we can build tools that work this well or better, maybe even for less than the 5x cost in this example. That is why testers work so hard at building test automation tools.
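Plugging the better tool into the same sketch shows the leverage: at a 5x ratio the break-even point drops to five runs, and the savings multiply across the whole suite. The suite size and run count below are purely illustrative:

    public class ToolLeverage {
        public static void main(String[] args) {
            double x = 1.0;      // cost of one manual execution
            double ratio = 5.0;  // automation cost with the better tool
            int runs = 10;       // expected executions per test case
            int testCases = 200; // hypothetical suite size
            double savingsPerCase = runs * x - ratio * x; // 5x per test case
            System.out.printf("Savings: %.0fx per test case, %.0fx across %d test cases%n",
                    savingsPerCase, savingsPerCase * testCases, testCases);
        }
    }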