Technical Info on .NET and Java, Software Tool and Book Recommendations, and Thoughts on Software Development in General from Scott McMaster.
Wednesday, June 24, 2009
Download Eclipse 3.5 Galileo With Amazon S3 and CloudFront
Eclipse 3.5 (Galileo) is out, and the distributions are mirrored on Amazon S3 and CloudFront, which makes for a very fast download. Unfortunately, the S3 link isn't very well advertised on the download page, so I wanted to mention it here. When you get to the download page for the distro you want, look down the page for "Download from Amazon Web Services". I bet you'll be impressed.
Saturday, June 20, 2009
Mock Responses with Optional Elements in soapUI
I needed to create some fairly detailed mock responses for a service where pretty much every element in the response schema was optional. By default, soapUI does not add optional elements to the mock response XML, which was causing me a significant amount of pain. Then I noticed, under Preferences > WSDL Settings, an "Include Optional" option described as "always include optional schema elements when creating requests". Despite the wording mentioning only requests, checking it also got the optional elements included in my mock responses.
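To illustrate the kind of schema I'm talking about, here's a made-up fragment (not the actual service I was mocking) where every response element is declared optional with minOccurs="0":

```xml
<!-- Hypothetical response type: every child element is optional. -->
<xs:complexType name="OrderStatusResponse">
  <xs:sequence>
    <xs:element name="orderId"   type="xs:string" minOccurs="0"/>
    <xs:element name="status"    type="xs:string" minOccurs="0"/>
    <xs:element name="shippedOn" type="xs:date"   minOccurs="0"/>
  </xs:sequence>
</xs:complexType>
```

With "Include Optional" unchecked, soapUI generated a mock response with none of these elements, leaving me nothing to fill in; with it checked, all three were stubbed out and ready to edit.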

Wednesday, June 17, 2009
An Example of Why You Shouldn't Test in Production
The article continues:
"A glitch occurred at the Chicago Department of Revenue involving Feddor's 0 plates being used during tests of ticketing equipment. It turned out that some city parking-enforcement aides punched in 0 when testing their electronic ticket-issuing devices, Revenue Department spokesman Ed Walsh said. Officials weren't aware there was a 0 plate or that Feddor was receiving tickets, Walsh said in response to the Tribune inquiry."
But thank goodness:
"But we are taking steps to rectify the situation so in the future an actual registered plate number will not be used to do the testing," Walsh said.
Before you laugh too hard, be honest: How many of you have run "test" transactions through your live/production systems exactly in this manner? And what precautions did you have in place?
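For what it's worth, one precaution I've seen (purely illustrative, and nothing to do with Chicago's actual system) is to reserve explicitly fake identifiers and have the production path refuse to act on them, along these lines:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: production code rejects reserved test identifiers,
// so a "test" transaction can never turn into a real ticket.
public class TicketService {
    // Hypothetical plate numbers reserved exclusively for test devices.
    private static final Set<String> TEST_PLATES =
            new HashSet<String>(Arrays.asList("TEST-0000", "TEST-0001"));

    public void issueTicket(String plateNumber) {
        if (TEST_PLATES.contains(plateNumber)) {
            throw new IllegalArgumentException(
                    "Refusing to issue a real ticket for reserved test plate " + plateNumber);
        }
        // ... normal ticket issuance continues here ...
    }
}
```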
Thursday, June 11, 2009
Why We Automate Tests
"Too often we are solution focused and marry ourselves to our automation rather than the product we are testing. Whenever I see testers that seem more vested in their automation than in the product they are shipping I worry the pendulum has swung too far."
He also tried to relate this question back to good test design:
"Let's shift the debate to good test design. Whether those tests ultimately end up being executed manually or via a tool is a moot point, let's just make them good ones that solve real testing problems."
I'm all for good test design and solving real testing problems, but how tests are run is not a moot point, and I'd like to give all of those "solution-focused" testers a little more credit than that. Let me start with some simple logic:
- Any series of steps that you perform more than once on a computer should be automated if at all possible.
- A test case is (at least in some sense) a series of steps that you perform on a computer.
- You want to run your test cases more than once (i.e. to perform regression testing).
- Therefore, you should automate your test cases.
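For concreteness, when I say "automate a test case," I mean capturing the steps and the expected result in code so a machine can repeat them. A minimal JUnit sketch (the Calculator class is hypothetical, inlined just to keep the example self-contained):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {
    // Hypothetical class under test, inlined so the sketch compiles on its own.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    // A manual tester would launch the app, punch in the operands, and
    // eyeball the result on every build. Captured as code, the same check
    // repeats on every build at essentially zero marginal cost.
    @Test
    public void additionHandlesNegativeOperands() {
        assertEquals(-1, new Calculator().add(2, -3));
    }
}
```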
If automating a test case were as simple as running it in the first place, there would be nothing more to say, and this post would be finished. Of course, that's not reality, and therein lies the rub. But I would argue that you're still better off automating far more often than not: automation yields significant benefits in both cost and quality.
Imagine that you design a test case and execute it the first time for a cost of x. In my experience, if a test case is ever going to detect a bug, there is a >90% chance that it will do so the first time you run it. But if it really is a good test case like James is arguing for, you want to run it again and again, ideally with each new build of the software, to look for regressions and related bugs.
Let's say that the cost to automate a test case is 10x what it takes to run it manually. If you only expect to run that test case five times over the life of the software, you should stick with manual testing. But if you expect to run it more than ten times (the break-even point in this example), it is in your long-term financial interest to automate it.
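The break-even arithmetic is easy to sketch in code (assuming, as the argument implicitly does, that re-running an automated test costs roughly nothing):

```java
public class BreakEven {
    public static void main(String[] args) {
        final double x = 1.0;                  // cost of one manual execution
        final double automationCost = 10 * x;  // one-time cost to automate

        for (int runs : new int[] { 5, 10, 15 }) {
            double manualTotal = runs * x;     // every manual run costs x again
            String verdict = manualTotal < automationCost ? "stay manual"
                           : manualTotal > automationCost ? "automate"
                           : "break even";
            System.out.printf("%2d runs: manual = %4.0fx, automated = %4.0fx -> %s%n",
                              runs, manualTotal, automationCost, verdict);
        }
    }
}
```

Five runs favors manual, fifteen favors automation, and ten is the break-even point, exactly as above.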
Now let's say that you do expect to run the test case ten times, and you decide to forgo automation. If you want to achieve the same quality assurance with your testing, you are committing to nine additional manual executions of that test. This is where quality risks start to creep into the process. How likely is it that you will really get all nine of those executions in, given your busy schedule? How timely will they be? Will you perform them consistently and correctly, or will you make some mistakes and miss some bugs? Automation solves those problems.
Finally, what if you can build a tool that drives the cost differential between manual and automated testing from 10x down to 5x? And what if you can leverage that tool over the long term across not one but n test cases? Granted, I am ignoring a couple of important factors, like the cost of automated-test-case maintenance and the magnitude of x relative to the release schedule. But I submit that the gist of the reasoning still holds. In many cases, we can build tools that work this well or better, maybe even for less than the 5x cost in this example. That is why testers work so hard at building test automation tools.
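The tool argument in the same units (again just a sketch; the one-time tool cost of 50x is a number I made up for illustration, and automated re-runs are still treated as free):

```java
public class ToolAmortization {
    public static void main(String[] args) {
        final double toolCost = 50.0;          // hypothetical one-time tool investment, in units of x
        final double perCaseAutomation = 5.0;  // per-test-case automation cost once the tool exists
        final int runsPerCase = 10;            // expected executions of each test case

        for (int n : new int[] { 10, 50, 100 }) {
            double manualTotal = n * runsPerCase * 1.0;               // n cases, run by hand every time
            double automatedTotal = toolCost + n * perCaseAutomation; // build tool once, automate each case
            System.out.printf("n = %3d test cases: manual = %5.0fx, tool + automation = %5.0fx%n",
                              n, manualTotal, automatedTotal);
        }
    }
}
```

At n = 10 the two approaches tie; by n = 100 the tool route costs roughly half as much, which is exactly the point.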