Thursday, July 12, 2012

How much to test after a code change?

Usually we do testing to ensure that everything in the application or software is working correctly. In the IT world, software is delivered in iterations, with a new addition to the software in each iteration. What is the impact of this new addition on the already running software? Where is the impact of the change? Is it only in the module that was changed, or has the change affected other modules as well?

The answers to the above questions tell us how much we need to test after a change. A change in requirements after the product is delivered is quite common nowadays. Implementing those requirements so that the complete system works correctly with the changes is a task that needs to be done well. In one of my recent projects we faced a lot of issues in a short span of time because of changing requirements. Changes made to one module had a vast impact: it affected modules that were not even related to the changed module. In such a case, every time a change was deployed we had to perform complete regression testing of the system. That was time consuming, but it was the only way to ensure a correct system.
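When the dependencies between modules are known, the minimum set of modules to re-test after a change can be estimated mechanically. Here is a minimal sketch in Python, assuming a hand-maintained dependency map (the module names are hypothetical):

```python
# Hypothetical map: each module -> the modules it depends on.
DEPENDS_ON = {
    "billing": {"accounts", "pricing"},
    "reports": {"billing", "accounts"},
    "ui": {"reports"},
}

def impacted_modules(changed):
    """Return the changed module plus every module that (transitively) depends on it."""
    impacted = {changed}
    grew = True
    while grew:                      # keep expanding until no new module is pulled in
        grew = False
        for module, deps in DEPENDS_ON.items():
            if module not in impacted and deps & impacted:
                impacted.add(module)
                grew = True
    return impacted

print(sorted(impacted_modules("accounts")))
# ['accounts', 'billing', 'reports', 'ui'] -- the suites to re-run
```

If the impacted set covers most of the system (as in the project described above), a full regression run is the honest answer; the sketch only helps when the change is genuinely local.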

Monday, July 9, 2012

Data corruption during testing

How often do we find corrupt data in testing databases? Sometimes corrupt data results in system failures that are not really code errors or bugs. This is a challenge testers may face while testing an application that has data dependencies.
Generally more than one tester works on these applications simultaneously, and as we all know, we testers are very good at modifying data to suit our needs. In the process, we modify data, perform our testing, and then forget about the data we modified. That sounds harmless enough. But if someone else picks up that modified data and performs another operation, the testing may fail. We are then happy that we found a bug, we log it, and we approach a developer like a master, telling him there is a major bug in the system. The developer debugs the code, finds that there is nothing wrong with it, and comes back to say the code is just fine. So how did the bug arise?
The answer is that the bug was caused by a data discrepancy in the database. The code runs just fine on correct data, but since the data is incorrect, the code cannot determine the correct behavior for such "discrepant" data.
A data discrepancy is as serious an issue as a major code error. Just imagine a newspaper saying "9,000 people killed in a bus accident" when it should say "9 people killed in a bus accident". The impact is huge. It is just like presenting an incorrect application to the client. It may result in a very angry client, or a client laughing as if taunting you. The end result is EMBARRASSMENT!!!
There are solutions to prevent data discrepancy in a database.
1. Take a backup: Before modifying any data in the database, take a backup so that if any woolly mammoth is encountered, we can make it disappear with a database restore.
2. Revert data manually: If a database backup and restore is too lengthy a process and there is not enough time for it, just revert the data you modified back to its original state. This can be done with an UPDATE SQL query.
3. Division of data: Data can be divided among the testers working on the application. Each member then works only on the data assigned to them, which keeps the data clean.
4. Minimal use of UPDATE and INSERT: While using SQL, UPDATE and INSERT should be used only when required. Don't run them in the testing database just for fun.
5. Ask before you do: Always ask a DBA or the senior database analyst before making changes to the testing database. This ensures the DBA is aware of the changes and can handle any data discrepancy in the database.
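Point 2 (reverting data manually) can be made routine by snapshotting a row before changing it. A small sketch using Python's built-in sqlite3 module, with a made-up `orders` table for illustration:

```python
import sqlite3

# An in-memory stand-in for the shared testing database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'SHIPPED')")

# Snapshot the original value BEFORE modifying it for a test scenario.
original = conn.execute("SELECT status FROM orders WHERE id = 1").fetchone()[0]

# Modify the data to set up the test.
conn.execute("UPDATE orders SET status = 'CANCELLED' WHERE id = 1")

# ... run the test against the modified data ...

# Revert with an UPDATE so the next tester finds clean data.
conn.execute("UPDATE orders SET status = ? WHERE id = 1", (original,))
print(conn.execute("SELECT status FROM orders WHERE id = 1").fetchone()[0])
# SHIPPED
```

The same pattern works on any SQL database; the discipline is simply "record before you UPDATE, restore after you test".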


Monday, July 2, 2012

Risk Based Testing


According to Webster’s New World Dictionary, risk is “the chance of injury, damage or loss; dangerous chance; hazard”. The objective of risk analysis is to identify potential problems that could affect the cost or outcome of the project. The objective of risk assessment is to take control of the potential problems before the problems control you, and remember: “prevention is always better than the cure”.


What is Risk Based Testing?

Risk Based Testing includes the following activities:

1. Make a prioritized list of risks.
2. Perform testing that explores each risk.
3. As risks evaporate and new ones emerge, adjust your test effort to stay focused on the current crop.

Why do Risk Based Testing?

Risk is a problem that might happen. The magnitude of a risk is a joint function of the likelihood and impact of the problem—the more likely the problem is to happen, and the more impact it will have if it happens, the higher the risk associated with that problem. Thus, testing is motivated by risk. Just because testing is motivated by risk does not mean that explicit accounting of risks is required in order to organize a test process. Standard approaches to testing are implicitly designed to address risks. You may manage those risks just fine by organizing the tests around functions, requirements, structural components, or even a set of predefined tests that never change. This is especially true if the risks you face are already well understood or the total risk is not too high.
If you are responsible for testing a product where the impact of failure is extremely high, you might want to use a rigorous form of risk analysis. Such methods apply statistical models and/or comprehensively analyze hazards and failure modes.

Risk Analysis Activity Model

How to Identify Risk?


The activity of identifying risk answers these questions:

·       Is there risk to this function or activity?
·       How can it be classified?

Risk identification involves collecting information about the project and classifying it to determine the amount of potential risk in the test phase and in production (in the future). The risk could be related to system complexity (e.g. embedded or distributed systems), new technology or methodology that could cause problems, limited business knowledge, or poor design and code quality.

Strategy for Risks

Risk based strategizing and planning involves the identification and assessment of risks and the development of contingency plans for possible alternative project activity or the mitigation of all risks.  These plans are then used to direct the management of risks during the software testing activities.  It is therefore possible to define an appropriate level of testing per function based on the risk assessment of the function.  This approach also allows for additional testing to be defined for functions that are critical or are identified as high risk as a result of testing (due to poor design, quality, documentation, etc.).

Assessing Risks


Assessing risks means determining the effects (including costs) of potential risks. Risk assessment involves asking questions such as: Is this a risk or not?  How serious is the risk?  What are the consequences?  What is the likelihood of this risk happening?  Decisions are made based on the risk being assessed.  The decision may be to mitigate, manage or ignore.
The important things to identify (and quantify) are:
·       What indicators can be used to predict the probability of a failure?
The important thing is to identify what matters to the quality of this function.  This may include design quality (e.g. how many change requests had to be raised), program size, complexity, programmers' skills, etc.
·       What are the consequences if this particular function fails?
Very often it is impossible to quantify this accurately, but the use of low-medium-high (1-2-3) may be good enough to rank the individual functions.
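The low-medium-high (1-2-3) ranking above can be turned directly into a test priority order by multiplying probability by consequence. A small sketch, with hypothetical functions and scores:

```python
LOW, MEDIUM, HIGH = 1, 2, 3

# Hypothetical functions scored as (probability of failure, consequence of failure).
functions = {
    "login": (MEDIUM, HIGH),
    "payment": (MEDIUM, HIGH),
    "report_export": (HIGH, LOW),
    "help_page": (LOW, LOW),
}

# Risk magnitude = probability x consequence; test the riskiest functions first.
ranked = sorted(functions, key=lambda f: functions[f][0] * functions[f][1], reverse=True)
print(ranked)
# ['login', 'payment', 'report_export', 'help_page']
```

Crude as the 1-2-3 scale is, it gives the team a defensible order of attack and makes it obvious where extra testing should be added.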

Prediction of Risks

Risk prediction is derived from the previous activities of identifying, planning, assessing, mitigating, and reporting risks. Risk prediction involves forecasting risks using the history and knowledge of previously identified risks. During test execution it is important to monitor the quality of each individual function (the number of errors found), and to add additional testing or even reject the function and send it back to development if the quality is unacceptable.  This is an ongoing activity throughout the test phase.

Friday, June 15, 2012

Cloud Computing

Earlier systems used a layered architecture: hardware, on which an operating system was installed, and on that OS the application. The major problem with this kind of architecture is that the application depends on the underlying layers, which can lead to frequent failures. For example, in an email system, if the server hardware fails, then Windows NT (the OS) and Microsoft Exchange Server (the application) fail with it. Cloud computing uses virtual computing to decouple the application from the operating system and the hardware.

{Virtual computing is not the same as cloud computing. Virtual computing is a component of cloud computing.}

Web Applications in Cloud Computing

The most basic cloud computing applications are web applications. A classic example is Google Docs. We can log in to Google Docs with our Google ID and start creating documents. This means we are accessing an office suite on a server from our computer. Now, if our system crashes, we can rest assured that our documents are safe and we can access them again from another computer.

Database Clustering

Web applications generally use a database to store data. In a clustering environment, more than one hardware server is set up with the operating system and database software. Let’s say 4 different servers are installed with the operating system and the MySQL database, and we connect the MySQL databases on each of the systems such that any data change made on any one of the servers is replicated to the other 3 servers. This forms a complete database cluster, and data is accessed from the cluster instead of from the individual database servers.
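The replication behaviour just described (a write to any node is copied to every other node) can be modelled in a few lines. This is a toy sketch, not how MySQL replication is actually implemented:

```python
class ClusterNode:
    """A toy database node that pushes every write to its peers."""
    def __init__(self):
        self.data = {}
        self.peers = []

    def write(self, key, value):
        self.data[key] = value
        for peer in self.peers:      # replicate the change to the other nodes
            peer.data[key] = value

# Build a 4-node cluster, as in the example above.
nodes = [ClusterNode() for _ in range(4)]
for node in nodes:
    node.peers = [n for n in nodes if n is not node]

nodes[0].write("user:1", "alice")    # the write lands on one server...
print(all(n.data["user:1"] == "alice" for n in nodes))  # ...and is visible on all
# True
```

Because every node holds the same data, any node can answer any read, which is exactly what makes the load balancing and failover described next possible.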
How does this help?


{Some businesses depend completely on storage, therefore data cannot be ignored.}

  • Clustering helps in redirecting a web request to any of the servers within the cluster. If one server in the cluster is loaded with requests, the request can be redirected to another server within the cluster. This process is also known as load balancing.
  • If any server within the cluster fails, web requests will no longer be directed to that server.
  • Clustering provides quick responses to web requests.
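The first two bullets can be sketched together: route each request to the least-loaded healthy server, and never route to a server that is marked down. Server names and loads here are hypothetical:

```python
servers = {"s1": 0, "s2": 0, "s3": 0}   # current request count per server
down = {"s3"}                            # failed servers get no traffic

def route():
    """Pick the least-loaded server that is not down, and record the request."""
    candidates = {s: load for s, load in servers.items() if s not in down}
    target = min(candidates, key=candidates.get)
    servers[target] += 1
    return target

assigned = [route() for _ in range(4)]
print(assigned)
# ['s1', 's2', 's1', 's2'] -- requests alternate between the healthy servers
```

Real load balancers add health checks, weights, and session stickiness on top of this, but the core decision is the same `min` over healthy candidates.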


Data and Its Importance


Why does data matter so much? Two reasons: monumental data growth and the opportunities that this data brings to businesses.  Data, of course, has been growing steadily for the past 50 years, but the orders of magnitude today are simply staggering.  The creation, transmittal, processing, and storage of data have reached epic proportions.  In simple terms (assuming a 50% annual growth rate), this equates to a 58x data growth factor within this decade.  This new abundance of data has fueled an even greater thirst from users demanding more granularity and increased regularity of data flow.
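The "58x within this decade" figure follows directly from compounding the assumed 50% annual growth rate over ten years:

```python
# 50% annual growth, compounded over a decade.
growth = 1.5 ** 10
print(round(growth, 1))
# 57.7 -- i.e. roughly the "58x" factor cited above
```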

It is becoming clear that we are reaching a collective inflection point where data growth either becomes an overwhelming burden to IT; or becomes a fuel to propel business innovation.  Successfully moving beyond the inflection point requires a new way of thinking, and a new data infrastructure that supports historic growth levels while containing costs and avoiding complexity.



What is Cloud Computing?

Cloud computing is a general term for anything that involves delivering hosted services over the Internet. Cloud computing is a technology that uses the internet and central remote servers to maintain data and applications. Cloud computing allows consumers and businesses to use applications without installation and access their personal files at any computer with internet access. This technology allows for much more efficient computing by centralizing storage, memory, processing and bandwidth. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The name cloud computing was inspired by the cloud symbol that's often used to represent the Internet in flowcharts and diagrams.


{In June 2011, a study conducted by VersionOne found that 91% of senior IT professionals actually don't know what cloud computing is, and two-thirds of senior finance professionals are confused by the concept.}

A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic -- a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.
A cloud can be private or public. A public cloud sells services to anyone on the Internet. (Currently, Amazon Web Services is the largest public cloud provider.) A private cloud is a proprietary network or a data center that supplies hosted services to a limited number of people. When a service provider uses public cloud resources to create their private cloud, the result is called a virtual private cloud. Private or public, the goal of cloud computing is to provide easy, scalable access to computing resources and IT services.
Cloud computing comes into focus only when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT's existing capabilities.

In the fall of 2011, Tata Consultancy Services (TCS) conducted an extensive study on how 600+ primarily large companies (most with more than $1 billion in revenue) were using applications in “the cloud” – software residing on remote data centers that organizations access via the Internet. Such data centers can be run by third parties that co-locate applications of multiple companies (so-called public clouds). Or these data centers can be run for the sole use of one organization, operated by that organization itself (private clouds).

From the analysis of the data from all three research streams, 10 findings were uncovered that explain how large companies around the world are using cloud applications, to what benefit, with what concerns, and with what future plans:

Finding No. 1: Despite the hype, cloud applications do not rule the large corporation, although their usage is expected to increase significantly. Cloud applications are still in the minority of all applications in companies (19% of the average large U.S. company’s applications, 12% in Europe, 28% in Asia-Pacific, and 39% in Latin American companies). But respondents expect the ratio of cloud to on-premises applications to increase greatly by 2014.  The case of Australia’s largest bank, Commonwealth Bank of Australia, illustrates why many companies have gained a voracious appetite for cloud applications.

Finding No. 2: The biggest driver of cloud applications is not cutting IT costs.  IT cost reduction is an important factor, but not the most important. Rather, standardizing software applications and business processes across a company (in the U.S. and Asia-Pacific) and ramping systems up or down faster (in Europe and Latin America) are the most highly rated drivers for shifting on-premises applications to the cloud. And the factors driving companies to launch entirely new applications in the cloud are quite different: to institute new business processes and launch new technology-dependent products and services. The case of assessment testing company CTB/McGraw-Hill shows why cloud computing will become a key tool for delivering pioneering IT-enabled offerings.

Finding No. 3: The early returns on cloud applications are impressive. Companies using cloud applications are increasing the number of standard applications and business processes, reducing cycle times to ramp up IT resources, cutting IT costs, and launching a greater number of new products and processes. The story of a major telco shows the ambitions of some of the most aggressive cloud adopters.

Finding No. 4: Customer-facing business functions are garnering the largest share of the cloud application budget.  Marketing, sales and service are capturing at least 40% of that budget in all four regions. The experience of Dell’s enterprise sector online marketing function shows how one large company is trying to get closer to customers through cloud marketing applications.  And a new private cloud at Web media company AOL Inc. shows how a technology-dependent company can make its technology more responsive and cost-effective.

Finding No. 5: Many companies are reluctant to put applications with sensitive data in the cloud. In the U.S. and Europe, the applications least frequently shifted from on-premises computers to the cloud were those that compiled data on employees (e.g., payroll), legal issues (legal management systems), product (pricing and product testing), and certain customer information (e.g., customer loyalty and e-commerce transactions). Still, some companies had shifted applications with customer data to the cloud, especially in customer service, and many planned to shift a number of customer-related applications to the cloud by 2014.

Finding No. 6: The heaviest users of cloud applications are the companies that manufacture the technology hardware that enables cloud computing (computers/electronics/telecom equipment), while healthcare services providers are the lightest users (in terms of the average number of cloud apps per business function).

Finding No. 7: The most aggressive adopters of cloud applications are companies in Asia-Pacific and Latin America. They report having much higher percentages of cloud apps to total apps, and bigger results from cloud apps, than their peers in the U.S. and Europe. We show how a large consumer products company uses the cloud to respond rapidly and effectively to consumer issues around the world.

Finding No. 8: Despite a significant shift to cloud applications, most companies (especially in Europe) remain conservative about which applications they put in public clouds. Fewer than 20% of U.S. and European companies would consider or seriously consider putting their most critical applications in public clouds. But 66% of U.S. and 48% of European companies would consider putting core applications in private clouds.

Finding No. 9: The keys to adopting and benefiting from cloud applications are overcoming fear of security risks and skepticism about ROI.

Finding No. 10: Companies evaluate cloud vendors most on their security and reliability/uptime capabilities, and far less on their price. This was the case in all four regions. In fact, price typically finished at the bottom of a list of nine factors in the cloud application purchasing decision.