Testing Techniques, Test Case Design Types, Types of Testing

Software Testing is the process of systematically running a software system or program to uncover errors or defects.

Software Testing in organizations aiming for SEI Level 3+:

•Is an acknowledged competitive advantage contributing to business success

•Has moved from an ad hoc activity to an engineering discipline

•Serves as a carrier for faster product delivery, with version-controlled test plans and test cases

•Enables reliability prediction through defect metrics

•Focuses on eliminating defects at the stage where they are introduced

Testing Phases:

–Unit Testing

–Integration Testing

–System Testing

–Acceptance Testing

All Test Plans and Test Cases shall be Reviewed and Approved before Testing starts.

Unit Testing

Lowest-level component test

Key foundation for later levels of testing

Detects 65% - 75% of all bugs

Stand-alone test

Ensures the conformance of each unit to the DDD (Detailed Design Document)
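As an illustration, here is a minimal sketch of a stand-alone unit test in Python, assuming a hypothetical discounted_price unit; the function, values and test names are illustrative only, with each test method meant to trace back to a Unit Test Case ID in the UTP:

```python
import unittest

# Hypothetical unit under test: a stand-alone price-discount calculation.
def discounted_price(price, discount_percent):
    """Return the price after applying a percentage discount."""
    if price < 0 or not (0 <= discount_percent <= 100):
        raise ValueError("invalid price or discount")
    return price * (100 - discount_percent) / 100

class DiscountedPriceUnitTest(unittest.TestCase):
    # Each method would be traced back to a Unit Test Case ID in the UTP.
    def test_normal_discount(self):
        self.assertAlmostEqual(discounted_price(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertAlmostEqual(discounted_price(80.0, 0), 80.0)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            discounted_price(80.0, 150)

if __name__ == "__main__":
    unittest.main()
```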

Integration Testing

An incremental series of tests of combinations or subassemblies of selected components in an overall system

Integration testing is incremental in that successively larger and more complex combinations of components are tested in sequence, proceeding from the unit level up to the fully integrated system

Ensures the conformance of each module to the HLDD (High-Level Design Document)

System Testing

Highest level of application functionality testing performed by the systems group

Ensures conformance to the functional requirements as specified in the SRS (Software Requirements Specification)

Acceptance Testing

Independent test performed by the users or QA prior to accepting the delivered system

Ensures conformance to the functional requirements as specified in the URD (User Requirements Document)

Testing Activities

Test Case Design

Review of Test Case Design

Testing

Recording Testing Results

Review and Sign-Off of Testing Results

Defect Reporting

Testing Documents and Records

Documents (SCIs)

Test Plan for the Project

Test Case Documents

Records

Test Report

Defect Log

Review Reports

Unit Test Plan (UTP)

•Plan for the entire Unit Testing in the project

•Identifies the Units/Sub-units covered during Unit Testing

•Each feature in the DDD mapped to a Unit Test Case ID

•To be prepared at the DDD phase itself

•The test cases are designed and documented

•References to the Test Case document(s) shall be given in UTP

•Deliverables at Unit Testing Phase shall be identified in UTP

•Resources required shall be mentioned in Project Plan

•Schedules shall be planned

Testing coverage shall be identified in the UTP:

Path Coverage

Statement Coverage

Decision (Logic/Branch) Coverage

Condition Coverage

Decision/Condition Coverage

Multiple-Condition Coverage

Functionality testing

User interface testing

Regression testing

Unit Testing Coverage

Path Coverage

Test cases will be written to cover all the possible paths of control flow through the program.

Statement Coverage

Test cases will be written such that every statement in the program is executed at least once

Decision (Logic/Branch) Coverage

Test cases will be written such that each decision has a true and a false outcome at least once.

Condition Coverage

Test Cases will be written such that each condition in a decision takes on all possible outcomes at least once.

Decision/Condition Coverage

Test Cases will be written such that each condition in a decision takes on all possible outcomes at least once, each decision takes on all possible outcomes at least once, and each point of entry is invoked at least once.

Multiple-Condition Coverage

Test Cases will be written such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once.
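To make the coverage criteria concrete, here is a small sketch assuming a hypothetical access_level function with a single decision built from two conditions; the test sets listed show which inputs each criterion would require (illustrative only, not from any particular project):

```python
# Hypothetical unit with one decision made up of two conditions (is_admin or is_owner).
def access_level(is_admin, is_owner):
    if is_admin or is_owner:          # decision; conditions: is_admin, is_owner
        return "write"
    return "read"

# Statement coverage: every statement executes at least once.
statement_tests = [(True, False), (False, False)]

# Decision (branch) coverage: the decision evaluates to True and to False.
decision_tests = [(True, False), (False, False)]

# Condition coverage: each condition takes True and False at least once
# (note that the decision itself happens to be True in both tests).
condition_tests = [(True, False), (False, True)]

# Decision/condition coverage: each condition AND the decision take both outcomes.
decision_condition_tests = [(True, True), (False, False)]

# Multiple-condition coverage: all combinations of condition outcomes.
multiple_condition_tests = [(True, True), (True, False), (False, True), (False, False)]

if __name__ == "__main__":
    for a, b in multiple_condition_tests:
        print(f"is_admin={a}, is_owner={b} -> {access_level(a, b)}")
```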

Integration Test Plan (ITP)

•Plan for the entire Integration Testing in the project

•Identifies the Modules/Sub-Modules covered during Integration Testing

•Each component of the HLDD mapped to an Integration Test Case ID

•Recommended to be prepared at the HLD phase itself

•The test cases are designed and documented

•References to the Test Case document(s) shall be given in ITP

•Deliverables at Integration Testing Phase shall be identified in ITP

•Resources required shall be mentioned in Project Plan

•Schedules shall be planned

Testing coverage shall be identified in the ITP:

Functionality Testing

User interface testing

Dependency (API) testing

Smoke testing

Capacity and volume testing

Error / disaster handling and recovery

Concurrent execution testing

Test case design techniques:

•Equivalence partitioning

•Boundary-value analysis

•Cause-effect graphing

•Error guessing

Equivalence Partitioning

The input domain is partitioned into equivalence classes such that

each test case should invoke as many different input conditions as possible, in order to minimize the total number of test cases necessary

if one test case in an equivalence class detects an error, all other test cases in the same class would be expected to find the same error
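A minimal sketch of equivalence partitioning, assuming a hypothetical is_eligible function whose valid input range is ages 18 to 60; one representative value is drawn from each equivalence class:

```python
# Hypothetical unit: accepts ages in the valid range 18..60 (inclusive).
def is_eligible(age):
    return 18 <= age <= 60

# Equivalence classes of the input domain:
#   invalid: age < 18    valid: 18 <= age <= 60    invalid: age > 60
# One representative test per class stands in for the whole class.
representatives = {
    "below range (invalid)": (10, False),
    "within range (valid)":  (35, True),
    "above range (invalid)": (70, False),
}

for name, (age, expected) in representatives.items():
    assert is_eligible(age) == expected, f"{name}: age={age}"
    print(f"{name}: age={age} -> {is_eligible(age)}")
```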

Boundary-value analysis

Test cases explore boundary conditions

Test cases consider input conditions as well as output conditions
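Continuing the same hypothetical 18-to-60 range (repeated here so the sketch stands alone), boundary-value analysis places test values on and immediately around each boundary:

```python
# Hypothetical range as above: valid ages are 18..60 inclusive.
def is_eligible(age):
    return 18 <= age <= 60

# Boundary-value analysis: test on, just below, and just above each boundary.
boundary_cases = [
    (17, False),  # just below lower boundary
    (18, True),   # on lower boundary
    (19, True),   # just above lower boundary
    (59, True),   # just below upper boundary
    (60, True),   # on upper boundary
    (61, False),  # just above upper boundary
]

for age, expected in boundary_cases:
    assert is_eligible(age) == expected, f"boundary case failed for age={age}"
print("all boundary cases passed")
```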

Cause-effect graphing

Technique to identify test cases by translating the specifications into a Boolean logic network.

This is a systematic method of generating test cases representing combinations of conditions
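A simplified sketch of the idea, assuming a hypothetical two-cause specification; full cause-effect graphing prunes the combinations via the graph, but the decision-table rows derived here show how each combination of causes becomes a test case:

```python
from itertools import product

# Hypothetical specification expressed as Boolean causes and effects:
#   cause c1: the account is active
#   cause c2: the balance is sufficient
#   effect "approve" = c1 AND c2
#   effect "error"   = NOT c1 OR NOT c2
def effects(c1, c2):
    return {"approve": c1 and c2, "error": (not c1) or (not c2)}

# Each row of the derived decision table becomes one test case.
for c1, c2 in product([True, False], repeat=2):
    print(f"active={c1}, sufficient={c2} -> {effects(c1, c2)}")
```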

Error guessing

The test cases are written from intuition and experience to uncover certain probable types of errors

The test case design is by the knack of “smelling out” errors

System Test Plan (STP)

•Plan for the entire System Testing in the project

•Identifies the features covered during System Testing

•Each feature in the SRS mapped to a System Test Case ID

•Recommended to be prepared at the SRS phase itself

•The test cases are designed and documented

•References to the Test Case document(s) shall be given in STP

•Deliverables at System Testing Phase shall be identified in STP

•Resources required shall be mentioned in Project Plan

•Schedules shall be planned

Testing coverage shall be identified in the STP:

Functionality testing

User interface testing

Usability testing

Volume testing

Stress testing

Security testing

Performance testing

Installation and upgrade testing

Standards conformance testing

Configuration testing

Network and distributed environment testing

Forward / backward compatibility testing

Reliability testing

Error / disaster handling and recovery testing

Serviceability testing

Documentation testing

Procedure testing

Localization testing

Acceptance Test Plan (ATP)

•Plan for the entire Acceptance Testing in the project

•Identifies the features covered during Acceptance Testing

•Each feature in the URD mapped to an Acceptance Test Case ID

•Recommended to be prepared at the URD phase itself

•The test cases are designed and documented

•References to the Test Case document(s) shall be given in ATP

•Deliverables at Acceptance Testing Phase shall be identified in ATP

•Resources required shall be mentioned in Project Plan

•Schedules shall be planned

Alpha Test

Beta Test

All the tests covered under system testing can be repeated as per user requirements

Terminology

Functionality Testing

Functionality testing is the determination of whether each functionality mentioned in the SRS is actually implemented. The objective is to ensure that all the functional requirements documented in the SRS are accomplished.

User Interface Testing

Focus is on testing the user interface, navigation and negative user behavior

Concurrent Execution Testing

Focus is on testing the behaviour of the system when several users or processes execute it concurrently, covering simple usage, standard usage and boundary situations

Volume Testing

Volume Testing is to ensure that the software

can handle the volume of data as specified in the SRS

does not crash with heavy volumes of data, but gives an appropriate message and/or makes a clean exit.

Example:

A compiler would be fed an absurdly large source program to compile

Stress Testing

Executes the software in a manner that demands resources in abnormal quantity, frequency and volume, and verifies that the system either performs normally or displays a message regarding the limitations of the system

Usability Testing

Usability testing is an attempt to uncover usability problems in the software involving the human factor

Example:

Are the error messages meaningful, easy to understand?

Security Testing

Attempts to verify that the protection mechanisms built into a system will in fact protect it from improper penetration

Example:

a database management system's data security mechanisms

Performance Testing

Tests the run-time performance of software within the context of an integrated system

Example:

the response times under certain configuration conditions.

Installation and Upgrade Testing

Focus is on testing whether the user would be able to install and upgrade the system

Configuration Testing

Configuration testing includes either or both of the following:

testing the software with the different possible hardware configurations

testing each possible configuration of the software

Network and Distributed Environment Testing

Focus is on testing the product in the required network and distributed environment

Dependency (API) Testing

Focus is on testing the API calls made by the system to other systems

Localization Testing

Focus is on testing problems associated with multiple-language support, conversion, and related hardware aspects

Reliability Testing

All of the software testing processes share the goal of improving software reliability. Reliability Testing, which is a part of System Testing, encompasses the testing of any specific reliability factors that are stated explicitly in the SRS

Error / Disaster Handling and Recovery Testing

Forces the software to fail in a variety of ways and verifies that the system recovers and resumes processing

Serviceability Testing

Serviceability testing covers the serviceability or maintainability characteristics of the software

Example:

service aids to be provided with the system, e.g., storage-dump programs, diagnostic programs

the maintenance procedures for the system

Documentation Testing

Documentation testing is concerned with the accuracy of the user documentation. This involves

Review of the user documentation for accuracy and clarity

Testing the examples illustrated in the user documentation by preparing test cases on the basis of these examples and testing the system
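One way to mechanize the second point is to execute the documented examples directly. Here is a minimal sketch using Python's doctest module, with a hypothetical word_count example standing in for an example taken from the user guide:

```python
def word_count(text):
    """Count the words in a string, as illustrated in the user guide.

    Example taken from the (hypothetical) user documentation:

    >>> word_count("testing the examples in the manual")
    6
    >>> word_count("")
    0
    """
    return len(text.split())

if __name__ == "__main__":
    import doctest
    # Runs every documented example and reports any that no longer hold.
    doctest.testmod(verbose=True)
```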

Smoke Testing

Focus is on testing each release of the system as it is being built, to uncover critical and showstopper errors

Procedure Testing

If the software forms a part of a large and not completely automated system, the interfaces of the developed software with the other components in the larger system shall be tested. These may include procedures to be followed by

The human operator

Database administrator

Terminal user

These procedures are to be tested as part of System testing

Standards Conformance Testing

–Focus is on testing whether the product conforms to prescribed and published standards

Alpha Test

–Within a vendor, the last level of internal test prior to a limited beta release

Beta Test

–The vendor’s equivalent of a pilot test (usually with a greater number of participating sites than for a pilot)

Equivalence Test

–A test using only a small sample of all possible test conditions, but ones which are chosen to uncover almost as many defects as an exhaustive test would have uncovered

–The key question is:

•What subset of all possible test cases has the highest probability of detecting the most errors?

Regression Testing:

Comprehensive re-test of an entire system:

•after system delivery, when a modification has been made, or

•before delivery, at the end of system test, after all test cases have been passed, but not yet passed together against the final version of the system product

•Regression testing requires that a regression test bed (comprehensive set of re-usable system test cases) be available throughout the useful life of the delivered system

•Functional acceptance tests form the core of this regression test bed.

•The regression test must be maintained, to keep it aligned with the system as the system itself evolves . This maintenance may not be a trivial effort.

•If a careful determination is made that only portions or subsystems will be affected by a particular change (i.e., the subsystems are decoupled and insulated), then only a partial regression re-test of the affected portions is absolutely necessary

•If errors are detected during on-going system operation, then test case(s) should be added to the existing regression test bed to detect any possible recurrences of the error or related errors (see the sketch at the end of this section)

•The additional effort to build a regression test facility is relatively minor if it is done during system development : the attitude should be that test cases are designed and organized to have an on-going life after system delivery, not one-time throw-aways

•The decision to perform any particular regression test is based on an analysis of the specific risks and costs

•Regression tests are more manageable and cost-effective when they are coordinated with scheduled releases, versus a piecemeal approach to system modification

•Full regression testing should always be performed when the overall system architecture has been affected

•A guideline for setting the boundaries of regression testing: include any interdependent, integrated applications; exclude tangential applications.
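Here is a minimal sketch of the regression test bed idea, assuming a hypothetical total_order_value unit and a hypothetical defect ID DEF-123; the test added for the field-reported defect stays in the bed for the useful life of the delivered system:

```python
import unittest

# Hypothetical unit that had a field-reported defect: it used to crash on an
# empty list instead of returning 0.
def total_order_value(item_prices):
    if not item_prices:       # fix for the reported defect
        return 0.0
    return sum(item_prices)

class RegressionTestBed(unittest.TestCase):
    # Existing functional acceptance test kept in the regression test bed.
    def test_normal_order(self):
        self.assertAlmostEqual(total_order_value([10.0, 2.5]), 12.5)

    # Test case added after defect DEF-123 (hypothetical ID) was reported in
    # operation, so the failure cannot silently reappear in a later release.
    def test_empty_order_regression_def_123(self):
        self.assertEqual(total_order_value([]), 0.0)

if __name__ == "__main__":
    unittest.main()
```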

Assessing Test Effectiveness

Techniques that provide means to assess the effectiveness, coverage and robustness of a set of test cases:

Error Seeding

Mutation Analysis

Note: These techniques are talked about more than they are practiced. Many people see them as somewhat impractical because they can be difficult to apply well.

Error Seeding

Approach:

Inject a small number of representative defects into the baseline product, and measure the percentage that are uncovered by test strategy variations

Purpose:

Determine the effectiveness of the test planning and execution, and predict (by extrapolation) how many real defects remain hidden in the product

Example:

                   No. found     %      Total defects
Seeded defects         21       75%      28 (actual)
Actual defects        241       75%     321 (projected)

Estimated unfound defects: 321 - 241 = 80
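The extrapolation behind the table, as a small calculation sketch (numbers taken from the example above):

```python
# Error-seeding extrapolation using the numbers from the example table.
seeded_total = 28     # defects deliberately injected into the baseline product
seeded_found = 21     # seeded defects that the tests uncovered
actual_found = 241    # real defects that the same tests uncovered

detection_rate    = seeded_found / seeded_total        # 21 / 28 = 0.75
projected_total   = actual_found / detection_rate      # about 321
estimated_unfound = round(projected_total) - actual_found

print(f"detection rate:    {detection_rate:.0%}")      # 75%
print(f"projected defects: {round(projected_total)}")  # 321
print(f"estimated unfound: {estimated_unfound}")       # about 80
```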

Mutation Analysis

Approach:

Create numerous (minor) variations of the baseline system or program. Then test each variation with the original, unchanged set of test cases, and determine how many of the variations behave, under those tests, as if the product had not been changed

Purpose:

Mutations that the tests cannot distinguish from the baseline product are carefully examined, to determine whether the tests are in fact inadequate
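A minimal sketch of mutation analysis, assuming a hypothetical absolute-value baseline and two hand-made mutants; the mutant that survives the deliberately weak test set signals that the tests may need strengthening:

```python
# Baseline program and two hand-made mutants (minor variations of it).
def absolute(x):                   # baseline
    return x if x >= 0 else -x

def mutant_negation_dropped(x):    # '-x' mutated to 'x'
    return x if x >= 0 else x

def mutant_comparison_flipped(x):  # '>=' mutated to '<='
    return x if x <= 0 else -x

# Original, unchanged test set (deliberately weak: it has no negative input).
test_cases = [(0, 0), (5, 5)]

def passes_all_tests(fn):
    return all(fn(arg) == expected for arg, expected in test_cases)

assert passes_all_tests(absolute)   # the baseline itself passes every test

for name, mutant in [("negation dropped", mutant_negation_dropped),
                     ("comparison flipped", mutant_comparison_flipped)]:
    if passes_all_tests(mutant):
        # Surviving mutant: the tests cannot tell it from the baseline, so the
        # test set is examined (adding a case such as (-3, 3) would kill it).
        print(f"mutant '{name}' survived -> test set may be inadequate")
    else:
        print(f"mutant '{name}' was killed by the existing tests")
```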
