Monday, May 30, 2011

TEST DESIGN TECHNIQUES


The three categories are:
1) Specification-based or black-box techniques
2) Structure-based or white-box techniques
3) Experience-based techniques

Here we will discuss the structure-based (white-box) testing techniques; the other categories will be explained later.

The test design techniques for structure-based (white-box) testing are:
  • Statement testing and coverage
  • Decision testing and coverage
1. Statement Testing & Coverage:
A statement is:
'An entity in a programming language, which is typically the smallest indivisible unit of execution' (ISTQB Def).

Statement coverage is:
'The percentage of executable statements that have been exercised by a test suite' (ISTQB Def)

Statement coverage:
Does not ensure coverage of all functionality


The objective of statement testing is to show that the executable statements within a program have been executed at least once. An executable statement can be described as a line of program source code that will carry out some type of action. For example:
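The original example is an image in the blog post; as a hypothetical stand-in, each line of this small Python snippet is an executable statement because it carries out an action:

```python
age = 25              # an assignment is an executable statement
print("Age:", age)    # a call that displays output is another executable statement
```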


If all statements in a program have been executed by a set of tests then 100% statement coverage has been achieved. However, if only half of the statements have been executed by a set of tests then only 50% statement coverage has been achieved.

The aim is to achieve the maximum amount of statement coverage with the minimum number of test cases.
  • Statement testing derives test cases to execute specific statements, normally to increase statement coverage.
  • 100% statement coverage for a component is achieved by executing all of the executable statements in that component.
If we are required to carry out statement testing, the amount of statement coverage required for the component should be stated in the test coverage requirements in the test plan. We should aim to achieve at least the minimum coverage requirements with our test cases. If 100% statement coverage is not required, then we need to determine which areas of the component are more important to test by this method.

Consider the following lines of code:
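The original listing is an image; a hypothetical three-line Python component of the kind described might look like this:

```python
# Three executable statements and no branching: one test executes them all,
# giving 100% statement coverage.
name = input("Enter customer name: ")   # line 1
greeting = "Hello, " + name             # line 2
print(greeting)                         # line 3
```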

Here, 1 test would be required to execute all three executable statements.

If our component consists of three lines of code we can execute them all with one test case, thus achieving 100% statement coverage. There is only one way we can execute the code - starting at line number 1 and finishing at line number 3.

Statement testing becomes more complicated when there is logic in the code.
For example:
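Again the original code is shown as an image; a hedged Python sketch of the logic described below (a single 'If' whose body holds the one statement of interest; the input line is just scaffolding for the sketch) could be:

```python
age = int(input("Enter age: "))          # scaffolding to obtain the input

if age < 17:
    print("Display error message")       # the executable statement the text refers to
```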

Here there is one executable statement, i.e. 'Display error message', hence one test is required to execute all executable statements.

Program code becomes tougher when logic is introduced. It is likely that a component will have to carry out different actions depending upon circumstances at the time of execution. In the code example shown, the component will do different things depending on whether the age input is less than 17 or is 17 and above. With statement testing we have to determine the routes through the code we need to take in order to execute the statements, and the inputs required to get us there!

In this example, the statement will be executed if the age is less than 17, so we would create a test case accordingly.

For more complex logic we could use control flow graphing. Control flow graphs consist of nodes, edges and regions.

Control flow graphs describe the logic structure of software programs - they are a method by which flows through the program logic are charted, using the code itself rather than the program specification. Each flow graph consists of nodes and edges: the nodes represent computational statements or expressions, and the edges represent the transfer of control between the nodes. Together the nodes and edges enclose an area known as a region.

In the diagram, the structure represents an 'If Then Else Endif' construct. Nodes are shown for the 'If' and the 'Endif'. Edges are shown for the 'Then' (the true path) and the 'Else' (the false path). The region is the area enclosed by the nodes and the edges.

All programs consist of these basic structures.

This is Hetzel notation, which shows only the logic flow.

There are four basic structures that are used within control-flow graphing.

The 'Do While' structure will execute a section of code while a field or indicator is set to a certain value. For example:
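A small illustrative sketch (Python has no literal 'Do While' keyword; its `while` loop checks the condition before each pass, which is the behaviour described here):

```python
counter = 0
while counter < 3:           # condition is evaluated before each pass through the body
    print("pass", counter)
    counter = counter + 1
```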

The 'Do Until' structure will execute a section of code until a field or indicator is set to a certain value. For example:
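Python has no built-in 'Do Until' construct either; a hedged equivalent runs the body first and tests the exit condition afterwards:

```python
counter = 0
while True:
    print("pass", counter)   # the body always runs at least once
    counter = counter + 1
    if counter >= 3:         # exit condition is evaluated after the body ('until' it is true)
        break
```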

The evaluation of the condition occurs after the code is executed.

The 'Go To' structure will divert the program execution to the program section in question. For example:

So the logic flow code could now be shown as follows:

If we applied control-flow graphing to our sample code, the 'If Then Else' structure would be applicable.
However, while it shows us the structure of the code, it doesn't show us where the executable statements are, and so it doesn't yet help us with determining the tests we require for statement coverage.

We can introduce extra nodes to indicate where the executable statements are.

And we can see the path we need to travel to execute the statement in the code.

What we can do is introduce extra nodes to indicate where the statements occur in the program code.
Now in our example we can see that we need to answer 'yes' to the question being posed to traverse the code and execute the statement on line 2.

Now consider this code and control flow graph:
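The code and graph are images in the original post; a hypothetical Python version of the logic described below would be:

```python
age = int(input("Enter age: "))

if age < 17:
    print("Error message")   # executed only when the answer to the decision is 'yes'
else:
    print("Customer OK")     # executed only when the answer is 'no'

# One test can traverse only one branch, so two tests (e.g. age = 16 and age = 20)
# are needed for 100% statement coverage.
```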

We will need 2 tests to achieve 100% statement coverage.

Program logic can be a lot more complicated than the examples I have given so far!
In the source code shown here, we have executable statements associated with each outcome of the question being asked. We have to display an error message if the age is less than 17 (answering 'yes' to the question), and we display 'Customer OK' if we answer 'no'.
We can only traverse the code once with a given test; therefore we require two tests to achieve 100% statement coverage.

Example:
We will need 3 tests to achieve 100% statement coverage.
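The nested-if listing is also an image; reconstructing it from the line numbers mentioned below (the messages are invented for illustration), a hypothetical Python version might be:

```python
fuel_tank_empty = input("Fuel tank empty? (y/n) ") == "y"
petrol_engine = input("Petrol engine? (y/n) ") == "y"

if fuel_tank_empty:                  # first decision (the text's line 1)
    if petrol_engine:                # supplementary, nested decision (line 2)
        print("Fill with petrol")    # reachable only via empty AND petrol engine (line 3)
    else:
        print("Fill with diesel")    # reachable only via empty AND NOT petrol engine (line 5)
else:
    print("Fuel level OK")           # reachable only when the tank is not empty (line 8)
```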

Now it gets even more complicated!
In this example we have a supplementary question, or what is known as a 'nested if'. If we answer 'yes' to 'If fuel tank empty?' we are then asked a further question, and each outcome of this question has an associated statement.

Therefore we will need two tests that answer 'yes' to 'If fuel tank empty':

  • Fuel tank empty AND petrol engine (to execute line 3)
  • Fuel tank empty AND NOT petrol engine (to execute line 5)
One further test will be required where we answer 'no' to 'If fuel tank empty' to enable us to execute the statement at line 8.

And this will be the last example for statement coverage; we will then move on to decision coverage.

Here, we will need 2 tests to achieve 100% statement coverage.

In this example we have two separate questions being asked.
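A hedged Python sketch of two independent decisions of this kind (the drink and cream wording follows the tests listed below; the exact statements are assumptions):

```python
coffee_drinker = input("Coffee drinker? (y/n) ") == "y"
wants_cream = input("Want cream? (y/n) ") == "y"

if coffee_drinker:
    print("Serve coffee")
else:
    print("Serve tea")       # assumed alternative - the original statement is not shown

if wants_cream:
    print("Add cream")
else:
    print("Add milk")        # 'but milk' comes from the decision-coverage discussion later on
```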

The tests we have shown are:

  • A coffee drinker who wants cream
  • A non-coffee drinker who doesn't want cream
Our two tests achieve 100% statement coverage, but equally we could have had two tests with:

  • A coffee drinker who doesn't want cream
  • A non-coffee drinker who wants cream
If we were being asked to achieve 100% statement coverage, and if all statements were of equal importance, it wouldn't matter which set of tests we choose.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Checking your calculation values:

Minimum tests required to achieve 100% coverage:

Decision coverage >= Statement coverage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

2. Decision Testing & Coverage:
A decision is:
'A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.' (ISTQB Def)




Decision coverage is:
'The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.' (ISTQB Def)

Decision coverage:


The objective of decision coverage testing is to show that all the decisions within a component have been executed at least once.
A decision can be described as a line of source code that asks a question.
For example:
If all decisions within a component have been exercised by a given set of tests then 100% decision coverage has been achieved. However, if only half of the decisions have been exercised by a given set of tests then you have only achieved 50% decision coverage.
Again, as with statement testing, the aim is to achieve the maximum amount of coverage with the minimum number of tests.



  • Decision testing derives test cases to execute specific decision outcomes, normally to increase decision coverage.
  • Decision testing is a form of control flow testing as it generates a specific flow of control through the decision points.
If we are required to carry out decision testing, the amount of decision coverage required for a component should be stated in the test requirements in the test plan.
We should aim to achieve at least the minimum coverage requirements with our test cases. If 100% decision coverage is not required, then we need to determine which areas of the component are more important to test by this method.

  • Decision coverage is stronger than statement coverage.
  • 100% decision coverage for a component is achieved by exercising all decision outcomes in the component.
  • 100% decision coverage guarantees 100% statement coverage, but not vice versa.
Decision testing can be considered the next logical progression from statement testing, in that we are not so much concerned with testing every statement as with exercising the true and false outcomes of every decision.
As we saw in our earlier examples of statement testing, not every decision outcome has a statement (or statements) to execute.
If we achieve 100% decision coverage, we will have executed every outcome of every decision, regardless of whether there were associated statements or not.

Let's take the earlier example we had for statement testing:
This would require 2 tests to achieve 100% decision coverage, but only 1 test to achieve 100% statement coverage.
In this example there is one decision, and therefore 2 outcomes.
To achieve 100% decision coverage we could have two tests:
  • Age less than 17 (answer 'yes')
  • Age equal to or greater than 17 (answer 'no')
This is a greater number of tests than would be required for statement testing, as statements are only associated with one decision outcome (line 2).
Again, consider this earlier example:

We will need 2 tests to achieve 100% decision coverage and also 2 tests to achieve 100% statement coverage.

This example would still result in two tests, as there is one decision and therefore two outcomes to test.
We would also need two tests to achieve 100% statement coverage, as there are statements associated with each outcome of the decision.
So, in this instance, statement and decision testing give us the same number of tests. Note that if 100% coverage is required, statement testing can give us the same number of tests as decision testing, but never more!

Let's look at some more examples now.

We will need 3 tests to achieve 100% decision coverage, but only 1 test to achieve 100% statement coverage.

Here we have an example of a supplementary question, or a 'nested if'.
We have two decisions, so you may think that four tests would be required to achieve 100% decision coverage (two for each decision).
This is NOT the case! We can achieve 100% decision coverage with three tests - we need to exercise the 'Yes' outcome from the first decision (line 1) twice, in order to subsequently exercise the 'Yes' and then the 'No' outcome from the supplementary question (line 2).
We need a further, third test to ensure we exercise the 'No' outcome of the first decision (line 1).
There is only one decision outcome that has an associated statement - this means that 100% statement coverage can be achieved with one test.

As more statements are added, the tests for decision coverage are the same:

3 tests to achieve 100% decision coverage, and 2 tests to achieve 100% statement coverage.

We have now introduced a statement that is associated with the 'No' outcome of the decision on line 2.
This change affects the number of tests required to achieve 100% statement coverage, but does NOT alter the number of tests required to achieve 100% decision coverage - it is still three!

Example:
3 tests to achieve both 100% decision coverage and 100% statement coverage.

Finally, we have statements associated with each outcome of each decision - the number of tests needed to achieve 100% statement coverage and 100% decision coverage is now the same.

Example:
We will need 2 tests to achieve 100% decision coverage and 100% statement coverage.

We looked at this example of the 'If Then Else' structure when considering statement testing.
As the decisions are separate questions, we only need two tests to achieve 100% decision coverage (the same as the number required for statement coverage).
You may have thought that four tests were required - exercising the four different routes through the code - but remember, with decision testing our concern is to exercise each outcome of each decision at least once. As long as we have answered 'Yes' and 'No' to each decision we have satisfied the requirements of the technique.


The tests we have illustrated would need the following input conditions:
  • Coffee drinker wanting cream.
  • Non-coffee drinker not wanting cream (but milk).
Equally, we could have chosen the following input conditions:

  • Coffee drinker not wanting cream (but milk).
  • Non-coffee drinker wanting cream.
Then what about loops?
If we choose an initial value of p=4, we only need 1 test to achieve 100% statement and 100% decision coverage.

The control-flow graphs we showed earlier depicted a 'Do While' construct.
To reiterate, the 'Do While' structure will execute a section of code while a field or indicator is set to a certain value. For example:
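The listing itself is an image; reconstructed from the walkthrough below (the line numbers in the comments refer to the lines the text mentions), a hypothetical Python equivalent is:

```python
p = 4                     # initial value chosen for the single test

while p < 10:             # "line 1": Do While p < 10 - condition checked before the body
    if p < 5:             # "line 2"
        p = p * 2         # "line 3": first pass turns 4 into 8
    # "line 4": Endif
    p = p + 1             # "line 5": 8 -> 9 on the first pass, 9 -> 10 on the second
    # "line 6": loop back to the While condition on "line 1"

print(p)                  # 10 - the loop has exited
```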

The evaluation of the condition occurs before the code is executed.

Unlike the 'If Then Else', we can loop around the 'Do While' structure, which means that we exercise different routes through the code with one test.


As in the diagram above, if we set 'p' to an initial value of '4', the first time through the code we will:
  • Go from line 1 to line 2
  • Answer 'Yes' to the 'If' on line 2 (p < 5)
  • Execute the statement on line 3 (p = p * 2, so p now equals 8)
  • Go from line 3, through line 4, to line 5
  • Execute the statement on line 5 (which adds 1 to 'p', making its value '9')
  • Execute the statement on line 6, which takes us back up to line 1.
Again we execute the code, with the value of 'p' now '9':

  • Go from line 1 to line 2
  • Answer 'No' to the 'If' on line 2 (p is no longer less than 5)
  • Go from line 2, through line 4, to line 5
  • Execute the statement on line 5 (which adds 1 to 'p', making its value '10')
  • Execute the statement on line 6, which takes us back up to line 1.
Once more we execute the code:

  • Line 1 - 'p' is not less than '10' (it is equal to 10), therefore we exit the structure.

1 test - it achieves 100% statement coverage and 100% decision coverage.

And it's the same for the 'Do Until' structure.

If we choose an initial value of A = 15, we only need 1 test to achieve 100% decision coverage and 100% statement coverage.

The control flow structures we showed earlier also depicted a 'Do Until' structure.
To reiterate, the 'Do Until' structure will execute a section of code until a field or indicator is set to a certain value. For example,
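As before, the listing is an image; a hypothetical Python equivalent, reconstructed from the walkthrough below, is:

```python
A = 15                    # initial value chosen for the single test

while True:               # "line 1": top of the Do Until block
    if A < 20:            # "line 2"
        A = A * 2         # "line 3": first pass turns 15 into 30
    # "line 4": Endif
    A = A + 1             # "line 5": 30 -> 31 on the first pass, 31 -> 32 on the second
    if A > 31:            # "line 6": Until A > 31 - condition checked after the body
        break

print(A)                  # 32 - the loop has exited
```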

The evaluation of the condition occurs after the code is executed.
Unlike the 'If Then Else', we can loop around the 'Do Until' structure, which means that we exercise different routes through the code with one test.


In the example above, if we set 'A' to an initial value of '15', the first time through the code we will:
  • Go from line 1 to line 2
  • Answer 'Yes' to the 'If' on line 2 (A < 20)
  • Execute the statement on line 3 (A = A * 2, which makes A = 30)
  • Go from line 3, through line 4, to line 5
  • Execute the statement on line 5 (which adds 1 to 'A', making its value '31')
  • Execute the statement on line 6, which takes us back to line 1.
Again we execute the code, with the value of 'A' now '31':

  • Go from line 1 to line 2
  • Answer 'No' to the 'If' on line 2 (A is no longer less than 20)
  • Go from line 2, through line 4, to line 5
  • Execute the statement on line 5 (which adds 1 to 'A', making its value '32')
  • Execute the statement on line 6, which exits the structure ('A' is now greater than 31)
1 test - it achieves 100% statement coverage and 100% decision coverage.

END...

Friday, May 20, 2011

The Workbench

In order to understand testing methodology we need to understand the workbench concept. A workbench is a way of documenting how a specific activity has to be performed. A workbench is broken down into phases, steps, and tasks, as shown in the following figure.
Workbench with phases and steps
There are five tasks for every workbench:
Input: Every task needs some defined input and entrance criteria. So for every workbench we need defined  inputs. Input forms the first steps of the workbench.
Execute: This is the main task of the workbench which will transform the input into the expected output.  
Check: Check steps assure that the output after execution meets the desired result. 
Production output: If the check is right, the production output forms the exit criteria of the workbench.
Rework: If, during the check step, the output is not as desired, then we need to start again from the execute step.

The following figure shows all the steps required for a workbench.

Phases in a workbench

In real scenarios, projects are not made of one workbench but of many connected workbenches. A workbench gives you a way to perform any kind of task with proper testing. You can visualize every software phase as a workbench with execute and check steps. The most important point to note is that when we visualize any task as a workbench, by default we have the check part in the task.

Every software phase can be visualized as a workbench. Let’s discuss the workbench concept in detail:
Requirement phase workbench: The input is the customer’s requirements. We execute the task of writing a requirement document, we check whether the requirement document addresses all the customer needs, and the output is the requirement document.
Design phase workbench: The input is the requirement document, we execute the task of preparing a  technical document; review/check is done to see if the design document is technically correct and addresses  all the requirements mentioned in the requirement document, and the output is the technical document. 
Execution phase workbench: This is the actual execution of the project. The input is the technical  document; the execution is nothing but implementation/coding according to the technical document, and the  output of this phase is the implementation/source code. 
Testing phase workbench: This is the testing phase of the project. The input is the source code which needs to be tested; the execution is running the test cases, and the output is the test results.
Deployment phase workbench: This is the deployment phase. There are two inputs for this phase: one is the source code which needs to be deployed, and the other is the test results on which the deployment depends. The output of this phase is that the customer gets the product, which he can now start using.
Maintenance phase workbench: The input to this phase is the deployment results, execution is  implementing change requests from the end customer, the check part is nothing but running regression testing  after every change request implementation, and the output is a new release after every change request execution.

Monday, May 16, 2011

Testing Principles

Terms
Exhaustive testing
Principles
A number of testing principles have been suggested over the past 40 years and offer general guidelines common for all testing.

Principle 1 – Testing shows presence of defects 
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.

Principle 3 – Early testing
To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.

Principle 4 – Defect clustering
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A  small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.

Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any  new defects. To overcome this “pesticide paradox”, test cases need to be regularly reviewed and  revised, and new and different tests need to be written to exercise different parts of the software or system to  find potentially more defects.

Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from  an e-commerce site.

Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and  expectations.

Wednesday, May 11, 2011

Testing Techniques

What is Semantic Testing?

Semantic testing is a test technique whose goal is to test the relationships between data. Such a relationship can exist in three different ways:
  • Relationship between input data on 1 screen
  • Relationship between input data on different screens
  • Relationship between input data and data already existing in the system
The outcome of this test technique might reveal that a relationship is missing or that a relationship was incorrectly implemented.

How to create a semantic test?

Identify the relationships to check

Search the test basis for relationships. Test bases containing useful information for this technique are for instance: data model, graphical user interface specifications (screen descriptions), functional requirement specifications.

These are examples of data relationships in a functional requirement specification document for a travel booking system:

  • The user cannot check out if his shopping basket is empty
  • If the user orders at least 2 items on the "product overview" screen, he/she can enter a reduction code on the "check out" screen.
  • The delivery date can't be in the past (relationship between delivery date and system date)
  • The user has only access if his/her personal data is known by the system (relationship between sign-in name and information in the database)

Develop the relationships to check

Write down the relationships in a simple structure:
IF A
THEN B
ELSE C
Example:
If the user orders at least 2 items on the "product overview" screen, he/she can enter a reduction code on the "check out" screen. Reduction codes can only be used on Sunday. On other days a message is shown to promote shopping on Sundays.
IF day = Sunday
THEN
IF items ordered >= 2
THEN reduction code field is available
ELSE reduction code field is not available
ELSE Sunday shopping promotion message

Create test cases

Every line containing a THEN or an ELSE statement that is not followed by an IF statement forms the end-point of a test pathway. For the above example, we find three pathways:
IF day = Sunday
THEN
IF items ordered >= 2
THEN reduction code field is available (-> pathway 1)
ELSE reduction code field is not available (-> pathway 2)
ELSE Sunday shopping promotion message (-> pathway 3)
For each pathway a test case can be created:

Test case 1: on Sunday order 5 items
Expected result: reduction code is available

Test case 2: on Sunday order 1 item
Expected result: reduction code is not available

Test case 3: on Thursday order 5 items
Expected result: reduction code is not available
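As a hedged illustration, the rule and the three pathway tests above can be expressed directly in Python (the function and variable names are invented for this sketch):

```python
def reduction_code_pathway(day: str, items_ordered: int) -> str:
    """Return the pathway taken for the reduction-code rule."""
    if day == "Sunday":
        if items_ordered >= 2:
            return "reduction code field is available"        # pathway 1
        return "reduction code field is not available"        # pathway 2
    return "Sunday shopping promotion message"                # pathway 3


# The three test cases derived above (on a non-Sunday the promotion message is
# shown, so the reduction code is likewise not available):
assert reduction_code_pathway("Sunday", 5) == "reduction code field is available"
assert reduction_code_pathway("Sunday", 1) == "reduction code field is not available"
assert reduction_code_pathway("Thursday", 5) == "Sunday shopping promotion message"
```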

Where to apply semantic testing?

Semantic testing is a black-box test technique which is useful at System testing level and Acceptance testing level. An authentication procedure for instance is a typical example requiring semantic testing.

Friday, May 6, 2011

Severity and Priority

Severity is about the risk a bug poses if it gets out into the wild.
Software risk can be characterized as: The potential for some fault, failure or other unintended happening in the implemented system to cause harm or loss to one or more persons or organizations.

We assess the risk of a bug by asking questions about impact and probability. How much harm could this bug cause to something the customer cares about, such as human safety or the bottom line? How likely is this bug to manifest (noticeable) and how likely is that harm to occur if it does?
If a bug could cause significant harm but only manifests under very unlikely circumstances, then we might decide it’s less severe than a bug that could cause less harm but manifests frequently. Or not, depending on the context.

Priority is related to the business aspect:
  1. Must fix as soon as possible. Bug is blocking further progress in this area.
  2. Should fix soon, before product release.
  3. Fix if time permits; somewhat trivial. May be postponed.
 Severity is related to Technical aspects:
  1. Bug causes system crash or data loss.
  2. Bug causes major functionality or other severe problems; product crashes in obscure cases.
  3. Bug causes minor functionality problems, may affect "fit and finish".
  4. Bug contains typos, unclear wording or error messages in low visibility fields.
Most organizations have standard criteria for classifying bug severity, such as:

Severity 1 – Catastrophic bug or showstopper. Causes system crash, data corruption, irreparable harm, etc.

Severity 2 – Critical bug in important function. No reasonable workaround.

Severity 3 – Major bugs but has viable workaround.

Severity 4 – Minor bug with trivial impact.

Typically, Severity 1 and 2 bugs must be fixed before release, where 3’s and 4’s might not be, depending on how many we have and on plans for their subsequent disposition.

Priority, on the other hand, is about the order in which bugs need to be fixed. Often, priority and severity run hand-in-hand: a bug is both high severity and high priority to fix. But that’s not always true. Occasionally in testing we’d like to have lower-severity bugs fixed so we can explore an area more thoroughly and see if they’re masking something else. Particularly on large projects, we can also find that we have a larger number of high severity bugs open than the programmers can readily fix. In this case, we need to specify the order for fixes based on where we plan to test next. To some extent also, ease of programming kicks in. A programmer working on high severity bugs in a particular module may choose to fix the low severity bugs in the same session.

At least theoretically, bug severity doesn’t change. The potential for business or technical impacts stays pretty much the same throughout the development project. (The passage of time can affect risk, but that’s a subject for another post.)

Priorities for fixing bugs do change depending on where we are in the project. At first, it’s testing priorities that matter most. But the closer we get to release, the more important the customer’s priorities become, to the point where they take over entirely.

And that brings us to an essential question about both severity and priority. Who gets to decide?

Ultimately, it’s the customer’s prerogative to decide both severity and priority (using “customer” as the stand-in here for “key decision-makers”). We—testers, project managers, and programmers alike—can make educated guesses about business risk and even about business tolerance for risk, but we can’t really know and we certainly can’t decide for the customer. Similarly, we can’t decide bug fix order for the customer, who frequently has different priorities from ours.

That’s not to say we abdicate responsibility. It’s a tester’s job to try and represent the customer’s point of view when they are absent. It’s also our job to help customers make those decisions. We do that by attempting to understand the true significance of a bug and communicating our understanding. We also ask questions to help our customers assess relative risk. (As anyone who has ever supported UAT knows, customers frequently assume everything is equally high risk until we ask those questions.)

It’s good to engage with customers and ask those questions early, ideally as we go through the development project. (Agile projects have it easy in this regard.) Early engagement is especially important for assigning severity.

But until we reach the point in the project of determining bug fix priorities for release, it’s only practical for the entire development team to set priorities according to what’s needed to move the project forward. Most often that means the testers’ needs: the bug fix priorities for testing.

Of course, there are other factors affecting priority during a development project, including the relative risk and cost of fixing a bug. But generally speaking, it makes sense for testers to drive priority until we switch over to the customer’s priorities for release.

High Severity bugs examples
Severity: seriousness of the defect with respect to functionality

Most applications generally have a "Help" menu. "About" is a menu item under the "Help" menu which provides details about the application, its version, etc. So, if clicking on the "About" menu item in an application crashes the application, it can be considered a bug with high severity and low priority. High severity because the moment this item is clicked, the application crashes and users can't work on it - in a way, it's a show-stopper bug. However, the chances of users clicking on this item are very low, so this bug can be fixed later and is not really high priority. A high-priority bug would be one which impacts the functioning/testing of the application and needs an immediate fix.

1. Interface bugs [low severity]
    Eg1: Spelling mistake (high priority)
    Eg2: Improper alignment (low priority)

2. Input domain bugs [medium severity]
    Eg1: Does not allow valid values (high priority)
    Eg2: Allows invalid types also (low priority)

3. Race condition bugs [high severity]
    Eg1: Deadlock (high priority)
    Eg2: Improper order of services (low priority)

Category                Severity    Priority    Example
Interface bugs          Low         High        Spelling mistake
                                    Low         Improper alignment
Input domain bugs       Medium      High        Does not allow valid values
                                    Low         Allows invalid types also
Race condition bugs     High        High        Deadlock
                                    Low         Improper order of services

Testing Models


There are various models which have been presented in the past 20 years in the field of Software Engineering for development and testing. Let us discuss and explore a few of the famous models.

The following models are addressed: 
  • Waterfall Model
  • Spiral Model
  • 'V' Model
  • 'W' Model

The Waterfall Model 
This is one of the first models of software development, presented by B. W. Boehm. The Waterfall model is a step-by-step method of achieving tasks. Using this model, one can get on to the next phase of development activity only after completing the current phase. Also, one can go back only to the immediate previous phase.
In the Waterfall model each phase of the development activity is followed by verification and validation activities. Once a phase is completed with its testing activities, the team proceeds to the next phase. At any point of time, we can move back only one step, to the immediate previous phase. For example, one cannot move from the Testing phase back to the Design phase.
Waterfall Model 

Spiral Model 
In the Spiral model, a cyclical and prototyping view of software development is shown. Tests are explicitly mentioned (risk analysis, validation of requirements and of the development) and the test phase is divided into stages. The test activities include module, integration and acceptance tests. However, in this model the testing also follows the coding, the exception being that the test plan should be constructed after the design of the system. The spiral model also identifies no activities associated with the removal of defects.


'V' Model

Many of the process models currently used can be more generally connected by the 'V' model where the 'V' describes the graphical arrangement of the individual phases. The 'V' is also a synonym for Verification and Validation.
By ordering the activities in time sequence and with abstraction levels, the connection between development and test activities becomes clear. Opposite-lying activities complement one another, i.e. they serve as a basis for test activities. For example, the system test is carried out on the basis of the results of the specification phase.

The 'W' Model
From the testing point of view, all of the models are deficient in various ways:
  • The test activities first start after the implementation. The connection between the various test stages and the basis for the tests is not clear.
  • The tight link between test, debug and change tasks during the test phase is not clear.

Why 'W' Model? 
In the models presented above, testing usually appears as an unattractive task to be carried out after coding. In order to place testing on an equal footing, a second 'V' dedicated to testing is integrated into the model. Both 'V's put together give the 'W' of the 'W-Model'.



Tuesday, May 3, 2011

Traceability Matrix

A traceability matrix is a document, usually in the form of a table, that correlates any two baselined documents that require a many-to-many relationship, in order to determine the completeness of that relationship; for example, to map the relationship between test requirements and test cases. From a traceability matrix, we can check which requirements are covered by which test cases, and which requirements a particular test case covers.
In this matrix, the rows hold the requirements. For every document {HLD, LLD etc.}, there will be a separate column. So, in every cell, we need to state what section in the HLD addresses a particular requirement. Ideally, if every requirement is addressed in every single document, all the individual cells will have valid section IDs or names filled in. Then we know that every requirement is addressed. If any requirement is missed, we need to go back to the document and correct it, so that it addresses the requirement.
In a nutshell, requirements traceability is the process of ensuring that one or more test cases address each requirement.  
                                      
Example of a Traceability Matrix document:
 Req ID    Req Description    TC001    TC002    TC003
 R1.1      ………                Yes               Yes
 R1.2      ……….               Yes
 R2.1      …….                         Yes

The above table shows:
Requirement R1.1 is covered in TC001 and TC003.
R1.2 is covered in TC001.
R2.1 is covered in TC002.
The above table also shows the test coverage. From the traceability matrix document, we can ensure that all the requirements are addressed in the test cases.
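As a hedged sketch (the requirement and test case IDs follow the example table; the code itself is not part of the original post), a traceability matrix can also be represented and checked programmatically:

```python
# Map each requirement to the test cases that cover it (IDs from the table above).
traceability = {
    "R1.1": ["TC001", "TC003"],
    "R1.2": ["TC001"],
    "R2.1": ["TC002"],
}

# Requirements with no covering test case reveal a coverage gap.
uncovered = [req for req, test_cases in traceability.items() if not test_cases]
print("Uncovered requirements:", uncovered or "none")

# Which requirements does a particular test case cover?
test_case = "TC001"
covered = [req for req, test_cases in traceability.items() if test_case in test_cases]
print(test_case, "covers:", covered)
```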
Disadvantages of not using Traceability Matrix:
  1. Poor or unknown test coverage, more defects found in production.
  2. It can lead to missing some bugs in earlier test cycles which may then arise in later test cycles, followed by a lot of discussions and arguments with other teams and managers before release.
  3. Difficult project planning and tracking, and misunderstandings between different teams over project dependencies, delays, etc.
Benefits of using Traceability Matrix: 
  1. Make obvious to the client that the software is being developed as per the requirements.
  2. To make sure that all requirements are included in the test cases.
  3. To make sure that developers are not creating features that no one has requested
  4. Easy to identify the missing functionalities.
  5. If there is a change request for a requirement, then we can easily find out which test cases need to be updated.
  6. It helps to spot “extra” functionality in the completed system that was not specified in the design specification and would otherwise waste manpower, time and effort.