Who Made the Mistake?
A woman appeared on the Today Show in 1999 with a tax problem. The IRS had sent her a letter indicating that she was delinquent in paying her federal income tax and that they wanted their money now. The letter stated that she owed them $270 billion. Not surprisingly, she was more than a little dismayed when she saw the amount and called the IRS immediately. She explained to the agent who took her call that she did not owe them any money. The agent must have been in a hurry and assumed it was just another one of those "I don't owe you anything" calls, so he simply pushed the button that generated another form letter. This letter instructed the poor taxpayer that she did not have to pay the $270 billion all at once. The government would be happy to have her pay in three equal monthly installments of $90 billion each.
The appearance of the distressed taxpayer on television was probably not received by the IRS with the humor it deserved. It is quite possible that someone attempted to find the root cause of the problem, but since there are no reports concerning the follow-up, we can only surmise the course of events.
Let's assume that there were some upset people at the IRS and they decided to find out who caused the problem that embarrassed them on television. They quickly determined that the source of the problem had to be one of three people: the business analyst who wrote the specification, the developer who programmed it, or the quality assurance analyst who tested it.
In your opinion, who made the mistake? Or, perhaps, who made the biggest mistake? The analysis typically follows a very predictable path. One argument is that the tester clearly failed, because the tester had the last chance to find the problem before the software went into production. Another is that the spec writer is most culpable, for failing to note in the spec that there was an upper limit to the amount of money that could be demanded of a taxpayer. Lastly, some people maintain that the programmer should have applied common sense and recognized that the field needed an upper limit. The consensus is frequently that they all failed together.
But while it is difficult to identify the "guilty" party, the solution is equally unclear. Should the program reject amounts over some predefined limit, or should there be a warning message of some sort? Some mechanism should exist to flag tax demands that are unusually high. One possibility is a policy of comparing each tax demand to a table entry to determine which letters should be reviewed prior to mailing. The amount stored in the table could be adjusted periodically to keep reviews to a manageable level.
Other approaches, such as examining the taxpayer's history or reviewing the largest demands from each batch, may also be useful.
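A minimal sketch of the table-driven threshold approach might look like the following. The threshold value, names, and amounts here are illustrative assumptions, not details of any actual IRS system:

```python
from dataclasses import dataclass
from decimal import Decimal

# Hypothetical review threshold: the kind of value that would live in a
# configuration table and be adjusted periodically to keep the volume of
# manual reviews manageable.
REVIEW_THRESHOLD = Decimal("1000000")  # dollars; an assumed starting point

@dataclass
class TaxDemand:
    taxpayer_id: str
    amount: Decimal  # amount demanded, in dollars

def partition_for_review(demands):
    """Split a batch of demand letters into those safe to mail as-is
    and those that a person should review before mailing."""
    to_mail, to_review = [], []
    for demand in demands:
        (to_review if demand.amount > REVIEW_THRESHOLD else to_mail).append(demand)
    return to_mail, to_review

batch = [
    TaxDemand("A-123", Decimal("4250.00")),
    TaxDemand("B-456", Decimal("270000000000")),  # the infamous $270 billion
]
mail, review = partition_for_review(batch)
print(f"{len(mail)} letter(s) to mail, {len(review)} held for review")
```

The same partitioning hook could also host the other checks mentioned above, such as a comparison against the taxpayer's prior history.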
But there is another perspective... How much testing is appropriate for this type of application? What harm was done in sending out the ridiculous letter? How many taxpayers will be affected by the application's failure to detect unusual amounts? What would it have cost the IRS in time or money to continue testing the program? Did this program fail, or was it a prior program that was responsible for cleansing the data?
To prioritize a test and to determine the value of performing a test, we have to ask a very basic question:
What is the risk inherent in failure?
We should be practical in determining how much testing to perform and very clear about our stopping rules.
Stopping rules may include questions such as the following; the first two are made concrete in the sketch after the list:
- How much time did it take to find the last defect?
- What was the cost of finding the last defect?
- Does the system appear to work in a trial mode?
- Have all tests in the system test plan been logged as passed?
- How comprehensive was the unit testing process?
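One way to make the time and cost questions operational is to track the marginal cost of finding each successive defect and stop when it exceeds what catching one more defect is judged to be worth. The hourly rate, defect value, and discovery intervals below are illustrative assumptions:

```python
# A sketch of a cost-based stopping rule: stop testing once the marginal
# cost of finding one more defect exceeds the assumed value of catching it.
HOURLY_COST = 85.0            # assumed fully loaded cost of a tester-hour
WORTH_OF_A_DEFECT = 2_000.0   # assumed value of catching one more defect

# Hours of testing spent between consecutive defect discoveries; in a real
# project these would come from the defect log.
hours_between_defects = [1.5, 2.0, 3.5, 6.0, 14.0, 30.0]

for i, hours in enumerate(hours_between_defects, start=1):
    marginal_cost = hours * HOURLY_COST
    print(f"defect {i}: {hours:5.1f} h -> ${marginal_cost:,.2f} to find")
    if marginal_cost > WORTH_OF_A_DEFECT:
        print("Marginal cost now exceeds the value of another defect; "
              "this stopping rule would end testing here.")
        break
```

As defects become harder to find, the cost per defect climbs, and the rule eventually signals that further testing is no longer worth its price for this application.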
Take a look at the reported bugs. While they may be embarrassing, they may not always warrant significant changes in the way we test.