Monday, March 1, 2010

When to stop testing?




Testing is potentially endless. We cannot test until every defect has been unearthed and removed -- that is simply impossible.

At some point, we have to stop testing and ship the software. The question is when.



"When to stop testing" is one of the most difficult questions to a test engineer.



Common factors in deciding when to stop are:



· Deadlines (release deadlines, testing deadlines)

· Test cases completed, with a certain percentage passing

· Test budget depleted

· Coverage of code/functionality/requirements reaches a specified point

· The rate at which new bugs are found becomes too small

· The beta or alpha testing period ends

· The risk in the project is within an acceptable limit
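
To make this concrete, the small Python sketch below combines these factors into a single go/no-go check. All the field names and thresholds are assumptions chosen for illustration, not recommended values.

from dataclasses import dataclass

@dataclass
class TestStatus:
    """Snapshot of the test effort (all fields and thresholds are illustrative)."""
    pass_rate: float         # fraction of executed test cases that passed
    coverage: float          # code/functionality/requirement coverage, 0.0 to 1.0
    budget_used: float       # fraction of the test budget consumed
    bugs_per_day: float      # current rate at which new bugs are found
    deadline_reached: bool   # release or testing deadline has been hit

def should_stop_testing(s: TestStatus) -> bool:
    """Combine the common stop factors into one decision."""
    quality_gate = s.pass_rate >= 0.95 and s.coverage >= 0.80
    diminishing_returns = s.bugs_per_day < 1.0
    resources_exhausted = s.deadline_reached or s.budget_used >= 1.0
    # Stop when the quality targets are met and bug discovery has tapered off,
    # or when resources force the decision regardless of quality.
    return (quality_gate and diminishing_returns) or resources_exhausted

print(should_stop_testing(TestStatus(0.97, 0.85, 0.60, 0.4, False)))  # True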





Practically, we feel that the decision to stop testing is based on the level of risk acceptable to management. Since testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client based on the testing that has been done.



The risk can be measured by formal risk analysis, but for a short-duration, low-budget, low-resource project it can be deduced simply from:

· Test coverage

· The number of test cycles

· The number of high-priority bugs
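
For such small projects, even a crude scoring rule captures the idea. The sketch below turns the three indicators into a rough risk rating; the weights and thresholds are purely assumed.

def residual_risk(coverage: float, test_cycles: int, open_high_priority_bugs: int) -> str:
    """Rough residual-risk rating from the three indicators above (assumed weights)."""
    score = 0
    if coverage < 0.80:
        score += 2                          # a large untested area
    if test_cycles < 2:
        score += 1                          # too few regression passes
    score += 2 * open_high_priority_bugs    # each open high-priority bug adds risk
    if score == 0:
        return "low"
    return "medium" if score <= 2 else "high"

print(residual_risk(coverage=0.90, test_cycles=3, open_high_priority_bugs=0))  # low
print(residual_risk(coverage=0.70, test_cycles=1, open_high_priority_bugs=2))  # high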



Realistically, testing is a trade-off between budget, time and quality.

It is driven by profit models.

The pessimistic, and unfortunately most commonly used, approach is to stop testing whenever any of the allocated resources -- time, budget, or test cases -- is exhausted.



The optimistic stopping rule is to stop testing when either reliability meets the requirement or the benefit of continued testing cannot justify its cost. This usually requires reliability models to evaluate and predict the reliability of the software under test, and each evaluation requires repeatedly running the following cycle: failure data gathering -- modeling -- prediction. This method does not fit well for ultra-dependable systems, however, because real field failure data takes too long to accumulate.
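
The sketch below illustrates that gather -- model -- predict cycle in miniature. The failure data, the required MTBF, and the cost figures are all invented, and the crude moving-average estimate stands in for a proper reliability-growth model such as Goel-Okumoto or Musa.

# Minimal sketch of the gather -> model -> predict cycle (all numbers assumed).
required_mtbf_hours = 100.0      # reliability requirement (assumed)
cost_per_test_hour = 50.0        # testing cost (assumed)
value_of_found_defect = 2000.0   # benefit of removing one more defect (assumed)

# Interfailure times (hours) gathered so far, growing as testing continues.
interfailure_times = [5, 9, 14, 22, 35, 48, 70, 95, 130, 160]

def current_mtbf(times, window=4):
    """Model step: estimate MTBF from the most recent interfailure times."""
    recent = times[-window:]
    return sum(recent) / len(recent)

def expected_benefit_of_next_hour(times):
    """Predict step: crude expected value of one more hour of testing."""
    failures_per_hour = 1.0 / current_mtbf(times)
    return failures_per_hour * value_of_found_defect

mtbf = current_mtbf(interfailure_times)
benefit = expected_benefit_of_next_hour(interfailure_times)

if mtbf >= required_mtbf_hours:
    print(f"Stop: estimated MTBF {mtbf:.0f}h meets the requirement.")
elif benefit < cost_per_test_hour:
    print(f"Stop: expected benefit {benefit:.0f} < cost {cost_per_test_hour:.0f} per hour.")
else:
    print("Continue testing: gather more failure data and re-evaluate.")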



Statistical Testing

This can be made a release criterion and used to arrive at the conditions for stopping software testing.



The concept of statistical testing was invented by the late Harlan Mills (the IBM Fellow who invented Cleanroom software engineering). The central idea is to use software testing as a means to assess the reliability of the software rather than as a debugging mechanism. Statistical testing exercises the software along an operational profile and measures the interfailure times, which are then used to estimate its reliability. A good development process should yield an increasing mean time between failures every time a bug is fixed.
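
A minimal sketch of that idea follows: test inputs are drawn according to an assumed operational profile, interfailure times are recorded, and the trend in mean time between failures is checked. The profile and the failure data here are hypothetical.

import random

operational_profile = {          # usage probabilities (assumed)
    "login": 0.50,
    "search": 0.30,
    "checkout": 0.15,
    "admin_report": 0.05,
}

def next_operation():
    """Pick the next operation to exercise, weighted by the operational profile."""
    ops, weights = zip(*operational_profile.items())
    return random.choices(ops, weights=weights, k=1)[0]

# Interfailure times (hours of operation) observed across successive bug fixes.
interfailure_times = [4, 7, 6, 12, 18, 25, 41]

def mtbf_trend_is_increasing(times, window=3):
    """A healthy process should show MTBF growing as defects are removed."""
    early = sum(times[:window]) / window
    late = sum(times[-window:]) / window
    return late > early

print(next_operation())                               # e.g. 'login'
print(mtbf_trend_is_increasing(interfailure_times))   # True for this data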
