Thursday, July 12, 2012

How much to test after a code change?

Usually we do testing to ensure that everything is working correctly in the application or software. In the IT world, software is delivered in iterations, with something new added in each iteration. What is the impact of this new addition on the already running software? Where is the impact of the change? Is it only in the module that was changed, or has the change affected other modules as well?

The answers to these questions tell us how much we need to test after a change. Changes in requirements after the product is delivered are quite common nowadays. Implementing those requirements so that the complete system works correctly with the changes is a task that needs to be done well. In one of my recent projects we faced a lot of issues in a short span of time regarding changes in requirements. Changes made to one module had a vast impact: they affected modules that were not even related to the changed module. In such a case, every time a change was deployed we had to perform a complete regression test of the system. That was time consuming, but it was the only way to ensure a correct system.

Monday, July 9, 2012

Data corruption during testing

How often do you find corrupt data in testing databases? Sometimes corrupt data results in system failures that are not really code errors or bugs. This is a challenge that testers may face while testing an application that has data dependencies.
Generally more than one tester works on these applications simultaneously, and as we all know, we testers are very good at modifying data as per our needs. In the process we modify data, perform our testing, and then forget about the data we modified. Sounds so easy-going and cute. But if someone else picks up that modified data and performs another operation, there is a possibility that their testing may fail. Then we are happy that we found a bug, we log it, and approach the developer like a master, telling him there is a major bug in the system. The developer will then debug the code, find that there is nothing wrong with it, come back to you, and say that the code is just fine. So how did the bug arise?
The answer is that the bug was encountered due to a data discrepancy in the database. The code runs just fine on correct data, but because the data is incorrect, the code is unable to determine the correct behavior for such "discrepant" data.
Data discrepancy is as serious an issue as a major code error. Just imagine a newspaper saying "9,000 people killed in a bus accident" when it should say "9 people killed in a bus accident". The impact is huge. It is just like presenting an incorrect application to the client. It may even result in a very angry client, or a client laughing as if taunting you. The end result is EMBARRASSMENT!!!
There are solutions to prevent data discrepancy in a database.
1. Take a backup: Before modifying any data in the database, take a backup so that if any woolly mammoth is encountered, we can make it disappear with a database restore.
2. Revert data manually: If you think a database backup and restore is too lengthy a process and there is not enough time for such an activity, just revert the data you modified to its original state. This can be done using an UPDATE SQL query.
3. Division of data: Data can be divided among the testers working on the application. This helps because each member performs functionality on the data allotted to them, which keeps the data clean.
4. Minimal use of UPDATE and INSERT: While using SQL, UPDATE and INSERT should be used minimally and only when required. Don't fire UPDATE and INSERT statements just for fun in the testing database.
5. Ask before you do: One should always ask a DBA or a senior database analyst before making any changes to the testing database. This ensures that the DBA is aware of the changes and can handle any data-related discrepancy in the database.
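Points 1 and 2 above can be sketched in a few lines. The snippet below is a minimal illustration using Python's built-in sqlite3 module; the orders table, its columns, and the file paths are hypothetical stand-ins for whatever your testing database actually contains.

```python
import sqlite3

def backup_db(src_path, backup_path):
    """Point 1: copy the testing database to a backup file before touching data."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(backup_path)
    src.backup(dst)  # online copy of every table (Python 3.7+)
    dst.close()
    src.close()

def restore_db(backup_path, dst_path):
    """Make the woolly mammoth disappear: restore the database from the backup."""
    backup_db(backup_path, dst_path)  # a restore is just a copy the other way

def revert_row(db_path, order_id, original_status):
    """Point 2: revert a single modified row with an UPDATE query."""
    con = sqlite3.connect(db_path)
    with con:  # commits on success
        con.execute("UPDATE orders SET status = ? WHERE id = ?",
                    (original_status, order_id))
    con.close()
```

Either way, the principle is the same: know what the data looked like before you touched it, so you can put it back.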


Monday, July 2, 2012

Risk Based Testing


According to Webster’s New World Dictionary, risk is “the chance of injury, damage or loss; dangerous chance; hazard”. The objective of Risk Analysis is to identify potential problems that could affect the cost or outcome of the project. The objective of risk assessment is to take control of the potential problems before the problems control you, and remember: “prevention is always better than the cure”.


What is Risk Based Testing?

Risk Based Testing includes the following activities:

1. Make a prioritized list of risks.
2. Perform testing that explores each risk.
3. As risks evaporate and new ones emerge, adjust your test effort to stay focused on the current crop.

Why do Risk Based Testing?

Risk is a problem that might happen. The magnitude of a risk is a joint function of the likelihood and impact of the problem—the more likely the problem is to happen, and the more impact it will have if it happens, the higher the risk associated with that problem. Thus, testing is motivated by risk. Just because testing is motivated by risk does not mean that explicit accounting of risks is required in order to organize a test process. Standard approaches to testing are implicitly designed to address risks. You may manage those risks just fine by organizing the tests around functions, requirements, structural components, or even a set of predefined tests that never change. This is especially true if the risks you face are already well understood or the total risk is not too high.
If you are responsible for testing a product where the impact of failure is extremely high, you might want to use a rigorous form of risk analysis. Such methods apply statistical models and/or comprehensively analyze hazards and failure modes.
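The idea that risk magnitude is a joint function of likelihood and impact leads directly to a prioritized risk list (activity 1 above). A minimal sketch, where the risks and their 1-5 scores are made-up examples, not data from any real project:

```python
# Hypothetical risk register: (risk description, likelihood 1-5, impact 1-5).
risks = [
    ("Payment gateway times out under load", 4, 5),
    ("Typo on the About page",               3, 1),
    ("Data loss during migration",           2, 5),
]

def prioritize(risks):
    """Sort risks by magnitude = likelihood * impact, highest first."""
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in prioritize(risks):
    print(f"{likelihood * impact:2d}  {name}")
```

Test effort then flows from the top of this list downwards, and the list is re-sorted as risks evaporate and new ones emerge.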

Risk Analysis Activity Model

How to Identify Risk?


The activity of identifying risk answers these questions:

·       Is there risk to this function or activity?
·       How can it be classified?

Risk identification involves collecting information about the project and classifying it to determine the amount of potential risk in the test phase and in production (in the future). The risk could be related to system complexity (e.g. embedded systems or distributed systems), new technology or methodology that could cause problems, limited business knowledge, or poor design and code quality.

Strategy for Risks

Risk based strategizing and planning involves the identification and assessment of risks and the development of contingency plans for possible alternative project activity or the mitigation of all risks.  These plans are then used to direct the management of risks during the software testing activities.  It is therefore possible to define an appropriate level of testing per function based on the risk assessment of the function.  This approach also allows for additional testing to be defined for functions that are critical or are identified as high risk as a result of testing (due to poor design, quality, documentation, etc.).

Assessing Risks


Assessing risks means determining the effects (including costs) of potential risks. Risk assessment involves asking questions such as: Is this a risk or not?  How serious is the risk?  What are the consequences?  What is the likelihood of this risk happening?  Decisions are made based on the risk being assessed.  The decision(s) may be to mitigate, manage, or ignore.
The important things to identify (and quantify) are:
·       What indicators can be used to predict the probability of a failure?
The important thing is to identify what matters to the quality of this function.  This may include design quality (e.g. how many change requests had to be raised), program size, complexity, programmers' skills, etc.
·       What are the consequences if this particular function fails?
Very often it is impossible to quantify this accurately, but using low-medium-high (1-2-3) may be good enough to rank the individual functions.
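Both quantities above can be combined to rank the individual functions. A rough sketch of that low-medium-high scoring, where the function names and ratings are hypothetical:

```python
# Low-medium-high (1-2-3) scale from the text; function names and ratings
# below are hypothetical.
SCALE = {"low": 1, "medium": 2, "high": 3}

assessments = {          # function: (failure probability, consequence)
    "login":         ("medium", "high"),
    "report_export": ("high",   "low"),
    "billing":       ("high",   "high"),
}

def rank_functions(assessments):
    """Rank functions by probability * consequence, most risky first."""
    scored = [(name, SCALE[p] * SCALE[c]) for name, (p, c) in assessments.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

Coarse as it is, a ranking like this is usually enough to decide which functions get the deepest testing.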

Prediction of Risks

Risk prediction is derived from the previous activities of identifying, planning, assessing, mitigating, and reporting risks. Risk prediction involves forecasting risks using the history and knowledge of previously identified risks. During test execution it is important to monitor the quality of each individual function (the number of errors found), and to add additional testing, or even reject the function and send it back to development, if the quality is unacceptable.  This is an ongoing activity throughout the test phase.
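Monitoring the number of errors found per function can be as simple as counting defect-log entries. A minimal sketch, where the log entries and the rejection threshold are assumptions for illustration, not a standard:

```python
from collections import Counter

# Hypothetical defect log from test execution: one entry per error found,
# tagged with the function it was found in.
defect_log = ["billing", "billing", "login", "billing", "report_export", "billing"]

REJECT_THRESHOLD = 3  # assumed quality bar: more errors than this -> reject

def functions_to_reject(log, threshold=REJECT_THRESHOLD):
    """Count errors per function and flag those whose quality is unacceptable."""
    counts = Counter(log)
    return sorted(name for name, errors in counts.items() if errors > threshold)
```

Functions flagged this way either get additional testing or go back to development, and the counts feed into the risk predictions for the next iteration.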