Software Testing Guide Part II

11.1 Objective

The purpose of system testing is to demonstrate that the product is inconsistent with its original objectives; a successful system test case is therefore one that exposes such a discrepancy. System testing is oriented toward a distinct class of errors and is measured with respect to a distinct type of documentation in the development process. It may well overlap in time with other testing processes. Care must be taken that no component or class of error is missed, as this is the last phase of testing.

System Testing of software is divided into four major types:

· Functional System Testing;

· Regression Testing;

· Performance Testing; and

· Sanity Testing.

The system testing will be designed to test each functional group of software modules in a sequence that is expected in production. In each functional testing area, the following will be tested at a minimum:

· Initial inputs;

· Program modifications and functionality (as applicable);

· Table and ledger updates; and

· Error conditions.

System test cases are designed by analyzing the objectives and then formulated by analyzing the user documentation.

Different categories of test cases are given below:

· Facility Testing

· Volume Testing

· Stress Testing

· Usability Testing

· Security Testing

· Performance Testing

· Storage Testing

· Configuration Testing

· Compatibility/Conversion Testing

· Installability Testing

· Reliability Testing

· Recovery Testing

· Serviceability Testing

· Documentation Testing

· Procedure Testing

Time to Plan and Test: Test planning starts with the preparation of the Software Requirement Specification. Testing starts after the completion of unit testing and integration testing.

Responsibilities:

Project Manager / Project Leader: They are responsible for the following activities:

· Preparation of the test plan

· Obtaining existing test cases and checklists

· Getting the test plan and test cases reviewed and approved, using peer review or another suitable method

· Communication with the client

· Project tracking and reporting

Test Engineers: Their responsibilities include:

· Preparation of test cases

· Actual testing

· Recording actual test results on the checklist

· Detailed defect reporting

11.2 Processes to be followed in the activities

· Develop and review the system test plan and test results;

· Provide training for the system testers;

· Designate a “final authority” to provide written sign-off and approval of all deliverables in each implementation area. Once the person designated as the final authority approves the deliverables in writing, they will be considered final and the Project Team will proceed with migration to the user acceptance testing environment;

· Execute the system tests, including all cycles and tests identified in the plans;

· Resolve issues that arise during testing using formal issue resolution process; and

· Document test results.

11.3 System Test Team

· A few professional system-test experts

· A representative end user or two

· A human-factors engineer

· Key original analysts or designers of the program

Perhaps the most economical way of conducting a system test is to subcontract it to a separate company.

11.4 Hypothetical Estimate of when the errors might be found

Test Phase      Coding and logic-design errors   Design errors
Module Test     65%                              0%
Function Test   30%                              60%
System Test     3%                               35%
Total           98%                              95%

11.5 Input

· Software Requirement Specification

· Design Document

· Project Plan

· Existing test cases / checklists, if any

· User documents

· Unit and Integration test results

11.6 Deliverables:

· System Test Plan;

· Test Scripts;

· Test Cases;

· System Test Results; and

· System Tested Software.

11.7 Various Methods of System Testing

11.7.1 Functional Testing

A functional test exercises a system application with regard to functional requirements with the intent of discovering non-conformance with end-user requirements. This technique is central to most software test programs. Its primary objective is to assess whether the application does what it is supposed to do in accordance with specified requirements.

Test development considerations for functional tests include concentrating on test procedures that execute the functionality of the system based upon the project’s requirements. One significant test development consideration arises when several test engineers will be performing test development and execution simultaneously. When these test engineers are working independently & sharing the same data or database, a method needs to be implemented to ensure that test engineer A does not modify or affect the data being manipulated by test engineer B, potentially invalidating test engineer B’s results; one such method is sketched below. Also, automated test procedures should be organized in such a way that effort is not duplicated.
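As an illustration, here is a minimal sketch of that isolation idea, assuming pytest and a hypothetical shared store (a plain dict standing in for the shared database): each test run tags its records with a unique namespace so that two engineers’ runs cannot collide.

```python
import uuid

import pytest

# Hypothetical stand-in for a shared database table keyed by record id.
SHARED_DB = {}

@pytest.fixture
def data_namespace():
    """A prefix unique to this test run, so records created here
    cannot collide with records created by another engineer."""
    return f"test-{uuid.uuid4().hex[:8]}"

def test_create_and_read_back(data_namespace):
    # Only touch and assert on records inside our own namespace.
    record_id = f"{data_namespace}-customer-1"
    SHARED_DB[record_id] = {"name": "Alice", "balance": 100}
    assert SHARED_DB[record_id]["balance"] == 100
```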

The steps of functional testing are:

· Decompose & analyze the functional design specification.

· Partition the functionality into logical components and for each component make a list of the detailed functions.

· For each function, use the analytical black-box methods to determine inputs & outputs.

· Develop functional test cases.

· Develop a function coverage matrix (a minimal sketch follows this list).

· Execute the test cases & measure logic coverage.

· Develop additional functional tests, as indicated by the combined logic coverage of function & system testing.
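The function coverage matrix mentioned above can be as simple as a mapping from specified functions to the test cases that exercise them. A minimal sketch, assuming a hypothetical list of functions taken from a functional design specification:

```python
# Map each specified function to the test cases that exercise it.
coverage_matrix = {
    "open_account":  ["TC-01", "TC-02"],
    "deposit":       ["TC-03"],
    "withdraw":      ["TC-04", "TC-05"],
    "close_account": [],  # not yet covered: develop more tests
}

covered = sum(1 for cases in coverage_matrix.values() if cases)
total = len(coverage_matrix)
print(f"Function coverage: {covered}/{total} ({100 * covered / total:.0f}%)")
print("Functions still needing test cases:",
      [f for f, cases in coverage_matrix.items() if not cases])
```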

11.7.2 Security Testing

Security Testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. Security tests involve checks to verify the proper performance of system access & data access mechanisms. Test procedures are devised that attempt to subvert the program’s security checks. The test engineer uses security tests to validate security levels & access limits and thereby verify compliance with specified security requirements and any applicable security regulations.

Objective of Security Testing :

a) To check that:

· the system is password protected;

· users are granted only the necessary system privileges.

b) Deliberately attempt to break the security mechanism (a sketch of one such test follows this list) by:

· accessing the files of another user;

· breaking into the system authorization files;

· accessing a resource when it is locked.
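A minimal sketch of the shape such a test can take, with a hypothetical `read_file` API and an invented per-user access-control layer: the test deliberately requests a resource its user must not see and asserts that access is refused.

```python
import pytest

class AccessDenied(Exception):
    pass

# Hypothetical access-control layer: path -> owning user.
FILE_OWNERS = {"/home/alice/notes.txt": "alice"}

def read_file(user, path):
    """Return file contents only if `user` owns `path`."""
    if FILE_OWNERS.get(path) != user:
        raise AccessDenied(f"{user} may not read {path}")
    return "file contents"

def test_user_cannot_read_another_users_file():
    # A passing test here means the subversion attempt was blocked.
    with pytest.raises(AccessDenied):
        read_file("bob", "/home/alice/notes.txt")
```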

11.7.3 Performance Testing

Performance Testing is designed to test the run-time performance of software within the context of an integrated system. It should be done throughout all steps of the testing process; even at the unit level, the performance of an individual module may be assessed.

Performance testing verifies that the system application meets specific performance efficiency objectives. It can measure & report on such data as I/O rates, total number of I/O actions, average database query response time & CPU utilization rates. The same tools used in stress testing can generally be used in performance testing to allow automatic checks of performance efficiency.

To conduct performance testing, the following performance objectives need to be defined:

· How many transactions per second need to be processed?

· How is a transaction defined?

· How many concurrent & total users are possible?

· Which protocols are supported?

· With which external data sources or systems does the application interact?

Many automated performance test tools permit virtual user testing, in which the test engineer can simulate tens, hundreds or even thousands of users executing various test scripts.
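A stripped-down sketch of the virtual-user idea, using only the Python standard library; `do_transaction` is a hypothetical stand-in for whatever the project defines as one transaction:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def do_transaction():
    time.sleep(0.01)  # stand-in for real work (I/O, database query, ...)

def measure_tps(users, transactions_per_user):
    """Run the workload with `users` simulated concurrent users and
    return the achieved transactions per second."""
    total = users * transactions_per_user
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(total):
            pool.submit(do_transaction)
    return total / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"Approx. {measure_tps(users=50, transactions_per_user=20):.0f} tx/s")
```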

11.7.4 Stress Testing

In stress testing, the system is subjected to extreme and maximum loads to find out whether and where the system breaks and to identify what breaks first. The system is asked to process a huge amount of data or perform many function calls within a short period of time. It is important to identify the weak points of the system. System requirements should define these thresholds and describe the system’s response to an overload. Stress testing should then verify that it works properly when subjected to an overload.

Examples of stress testing include running a client application continuously for many hours, or simulating a multi-user environment. Typical types of errors uncovered include memory leaks, performance problems, locking problems, concurrency problems, excess consumption of system resources and exhaustion of disk space.

Stress tools typically monitor resource usage, including usage of global memory, DOS memory, free file handles, and disk space, and can identify trends in resource usage so as to detect problem areas, such as memory leaks and excess consumption of system resources and disk space.
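A minimal sketch of that monitoring idea, using the standard-library tracemalloc module to watch memory while a hypothetical operation is hammered in a loop; steady growth between snapshots hints at a leak:

```python
import tracemalloc

leaky_cache = []  # deliberately leaks, for demonstration

def operation_under_stress():
    leaky_cache.append("x" * 10_000)  # forgets to evict old entries

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(1_000):
    operation_under_stress()
after, _ = tracemalloc.get_traced_memory()
# Roughly 10 MiB of growth here; a flat curve would suggest no leak.
print(f"Memory grew by {(after - before) / 1024:.0f} KiB over 1000 calls")
```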

11.7.5 Reliability Testing

The goal of all types of testing is the improvement of the eventual reliability of the program, but if the program’s objectives contain specific statements about reliability, specific reliability tests might be devised. For the objective of building highly reliable systems, the test effort should be initiated during the development cycle’s requirements definition phase, when requirements are developed & refined.

11.7.6 Usability Testing

Usability Testing involves having the users work with the product & observing their responses to it. It should be done as early as possible in the development life cycle. The real customer is involved as early as possible. The existence of the functional design specification is the prerequisite for starting.

Usability testing is the process of attempting to identify discrepancies between the user interfaces of a product and the human engineering requirements of its potential users. Usability testing collects information on specific issues from the intended users. It is often an evaluation of a product’s presentation rather than its functionality.

Usability characteristics, which can be tested, include the following:

Accessibility, Responsiveness, Efficiency and Comprehensibility.

11.7.7 Environment Testing

The testing activities here involve basically testing the environment setup activities as well as the calibration of the test tools to match the specific environment.

When checking the set-up activities, we need to test the set-up script (if any) and the integration & validation of resources: hardware, software, network resources and databases. The objective should be to ensure the complete functionality of the production application & support performance analysis.

It is also necessary to check for stress testing requirements, where we require the use of multiple workstations to run multiple test procedures simultaneously.


11.7.8 Storage Testing

Products typically have storage specifications, for instance the amounts of main & secondary storage used by the program & the sizes of required temporary or spill files.

Checks should be made to monitor memory & backing-storage occupancy & to take the necessary measurements.

11.7.9 Installation Testing

Installation testing involves the testing of the installation procedures. Its purpose is not to find software errors, but to find installation errors i.e. to locate any errors made during the installation process.

Installation tests should be developed by the organization that produced the system, delivered as part of the system, and run after the system is installed. Among other things, the test cases might check to ensure that a compatible set of options has been selected, that the files created have the necessary contents, and that the hardware configuration is appropriate.

11.7.10 Recovery Testing

The system must have recovery objectives, stating how it is to recover from hardware failures and data errors. Such faults can be injected into the system to analyze the system’s reaction. A system must be fault tolerant; system failures must be corrected within a specified period of time.
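A minimal sketch of fault injection along these lines, assuming a hypothetical storage layer whose specified recovery behaviour is to retry a failed write; the test injects one failure and asserts that no acknowledged write is lost:

```python
class FlakyDisk:
    """Stand-in device that fails on the nth write (the injected fault)."""
    def __init__(self, fail_on):
        self.writes, self.fail_on, self.data = 0, fail_on, []

    def write(self, record):
        self.writes += 1
        if self.writes == self.fail_on:
            raise IOError("injected disk failure")
        self.data.append(record)

def save_with_retry(disk, record, retries=3):
    """Hypothetical recovery behaviour: retry after a failed write."""
    for _ in range(retries):
        try:
            disk.write(record)
            return True
        except IOError:
            continue
    return False

def test_recovers_from_single_injected_failure():
    disk = FlakyDisk(fail_on=1)  # first write fails, the retry succeeds
    assert save_with_retry(disk, "rec-1") is True
    assert disk.data == ["rec-1"]
```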

11.7.11 Volume Testing

This involves subjecting the program to heavy volumes of data. For instance, a compiler would be fed an absurdly large source program to compile. A linkage editor might be fed a program containing thousands of modules. If a program is supposed to handle files spanning multiple volumes, enough data are created to cause the program to switch from one volume to another. Thus, the purpose of volume testing is to show that the program cannot handle the volume of data specified in its objectives.
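Volume-test inputs of this size are usually better generated than stored. A minimal sketch, with a trivial line counter standing in for the hypothetical program under test:

```python
def record_stream(n):
    """Generate n CSV-like records lazily, so the test data never
    has to exist in memory or on disk all at once."""
    for i in range(n):
        yield f"record-{i},field-a,field-b\n"

def count_lines(lines):
    """Stand-in for the program under test: consume the stream."""
    return sum(1 for _ in lines)

# Feed a million records through without building a million-line file.
assert count_lines(record_stream(1_000_000)) == 1_000_000
```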

11.7.12 Error Guessing

Error Guessing is an ad hoc approach, based on intuition & experience, to identifying tests likely to expose errors. The basic idea is to make a list of possible errors or error-prone situations & then develop tests based on the list.

For instance, the presence of the value 0 in a program’s input or output is an error-prone situation. Therefore, one might write test cases for which particular input values have a 0 value and for which particular output values are forced to 0.

Also, where a variable number of inputs or outputs can be present, the cases of “none” and “one” are error-prone situations. Another idea is to identify test cases associated with assumptions that the programmer might have made when reading the specification, i.e. things that were omitted from the specification.

Thus, some items to try (exercised in the sketch after this list) are:

· Empty or null lists/strings

· Zero instances /occurrences

· Blank or null characters in strings

· Negative numbers
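A minimal sketch turning the list above into parameterised tests, assuming a hypothetical `average` function as the unit under test:

```python
import pytest

def average(values):
    if not values:
        raise ValueError("average of empty input is undefined")
    return sum(values) / len(values)

@pytest.mark.parametrize("values, expected", [
    ([5], 5),        # the "one" case
    ([0, 0, 0], 0),  # zero values
    ([-2, 2], 0),    # negative numbers
])
def test_error_prone_inputs(values, expected):
    assert average(values) == expected

def test_empty_input_rejected():
    # the "none" case: must fail loudly, not divide by zero
    with pytest.raises(ValueError):
        average([])
```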

11.7.13 Data Compatibility Testing

Many programs developed are often replacements for some deficient system, either a data processing or manual system. Programs often have specific objectives concerning their compatibility with, and conversion procedures from, the existing system. Thus the objective of Compatibility testing is to determine whether the compatibility objectives of the program have been met & whether conversion procedures work.

11.7.14 User Interface testing

The User Interface is checked against the design or requirement specification. The user interface is tested as per the User manual, on-line help and SRS.

The test cases should be built for interface style, help facilities & the error-handling protocol. Also, issues such as the number of actions required per task & whether they are easy to remember & invoke, how self-explanatory & clear the icons are, and how easy it is to learn the basic system operations need to be evaluated while conducting user interface testing.

11.7.15 Acceptance Testing

The acceptance test phase includes testing performed for or by end users of the software product. Its purpose is to ensure that end users are satisfied with the functionality & performance of the software system. The acceptance test phase begins only after the successful conclusion of system testing.

Commercial software products do not generally undergo customer acceptance testing, but often a large number of users receive an early copy of the software so that they can provide feedback as part of a beta test.

11.7.16 Limit testing

Limit Testing implies testing with values beyond the specified limits, e.g. for memory, number of users, number of open files, etc.

The test cases should focus on testing with out-of-range values, i.e. values that exceed those laid down in the specifications as what the system can handle well. Such cases should be included within every stage of the testing life cycle.
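A minimal sketch of a limit test, assuming a hypothetical session manager specified to support at most 100 concurrent users; the test drives the system just past the documented limit and asserts that the 101st session is refused rather than mishandled:

```python
import pytest

MAX_USERS = 100  # limit taken from the (hypothetical) specification

class SessionManager:
    def __init__(self):
        self.sessions = set()

    def open_session(self, user_id):
        if len(self.sessions) >= MAX_USERS:
            raise RuntimeError("user limit exceeded")
        self.sessions.add(user_id)

def test_limit_is_enforced():
    mgr = SessionManager()
    for uid in range(MAX_USERS):       # up to the limit: accepted
        mgr.open_session(uid)
    with pytest.raises(RuntimeError):  # one beyond: must be refused
        mgr.open_session(MAX_USERS)
```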

11.7.17 Error Exit Testing

This testing considers whether the software, in the case of a system error, displays appropriate system error messages & thereafter provides a clear exit.

All possibilities of a system error cropping up should be tested, except those causing abnormal termination.
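A minimal sketch of such a test, assuming a hypothetical `main` specified to print a clear message and exit with a documented non-zero status when its input file is missing, instead of dying with a raw traceback:

```python
import sys

import pytest

def main(path):
    try:
        open(path).read()
    except FileNotFoundError:
        print(f"error: input file not found: {path}", file=sys.stderr)
        sys.exit(2)  # the documented exit code for this error

def test_missing_file_gives_message_and_clean_exit(capsys):
    with pytest.raises(SystemExit) as excinfo:
        main("/no/such/file")
    assert excinfo.value.code == 2
    assert "input file not found" in capsys.readouterr().err
```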

11.7.18 Consistency testing

The system developed should have consistency throughout, with respect to both data & modules. Interrelated modules should use the same set of data, retrieving/writing the data to the same common place, & thus reflect uniformity over the entire system. Test cases should therefore be developed involving sample data & methods that provide an insight into whether or not the system ensures consistency.

11.7.19 Help Information Testing

Help information should be adequate and provide useful information. The contents should cover all significant areas of the system on which the users might require help. The flow of the help information should be sequential, and the links embedded in the document must be relevant & must be tested to confirm that they actually lead to the intended information. The contents should also be correct, & tested for clarity & completeness.

11.7.20 Manual procedure testing

In this method of test, the system is tested for manual device requirements and handling, for example tape loading or the manual switching of a device. The system should recognise all such devices, successfully perform the task of loading/unloading, and work smoothly with them. Also, any prescribed human procedures, such as procedures to be performed by the system operator, database administrator or terminal user, should be tested during the system test.

11.7.21 User information Testing

User information testing is concerned with the adequacy & correctness of the user documentation. It should be determined whether the user manual gives a proper representation of the system. The manual should also be tested for clarity & for whether it is easy to look up any information related to the system.

12 Testing GUI Applications

12.1 Introduction

The most obvious characteristic of GUI applications is the fact that the GUI allows multiple windows to be displayed at the same time. Displayed windows are ‘owned’ by applications and, of course, there may be more than one application active at the same time. Access to the features of the system is provided through mechanisms such as menu bars, buttons and keyboard shortcuts. GUIs free users to access system functionality in their preferred way: they have permanent access to all features and may use the mouse, the keyboard or a combination of both for a more natural dialogue with the system.

12.1.1 GUIs as universal client

GUIs have become the established alternative to traditional forms-based user interfaces. GUIs are the assumed user interface for virtually all systems development using modern technologies.

12.2 GUI Test Strategy

12.2.1 Test Principles Applied to GUIs

The approach concentrates on GUI errors and on using the GUI to exercise tests, so it is very much oriented toward black-box testing.

· Focus on errors to reduce the scope of tests. We intend to categorise errors into types and design tests to detect each type of error in turn. In this way, we can focus the testing and eliminate duplication.

· Separation of concerns (divide and conquer). By focusing on particular types of error and designing test cases to detect those errors, we can break up the complex problem into a number of simpler ones.

· Test design techniques where appropriate. Traditional black-box test techniques that we would use to test forms-based applications are still appropriate.

· Layered and staged tests. Organise the test types into a series of test stages. We implement integration tests of components and test the integrated application last. In this way, we can build the testing up in trusted layers.

· Test automation, wherever possible. Automation most often fails because of over-ambition. By splitting the test process into stages, we can seek out opportunities to use automation where appropriate, rather than trying to use automation everywhere.

12.3 Types of GUI errors

We can list some of the multifarious errors that can occur in a client/server-based application that we might reasonably expect to be able to test for using the GUI. Many of these errors relate to the GUI, others relate to the underlying functionality or interfaces between the GUI application and other client/server components.

· Data validation

· Incorrect field defaults

· Mishandling of server process failures

· Mandatory fields, not mandatory

· Wrong fields retrieved by queries

· Incorrect search criteria

· Field order

· Multiple database rows returned, single row expected

· Currency of data on screens

· Window object/DB field correspondence

· Correct window modality?

· Window system commands not available/don’t work

· Control state alignment with state of data in window?

· Focus on objects needing it?

· Menu options align with state of data or application mode?

· Action of menu commands aligns with state of data in window

· Synchronisation of window object content


By targeting a different category of errors from this list with each test type, we can derive a set of test types that each focus on a single category and together provide coverage across all error types.

12.4 Four Stages of GUI Testing

The four stages are summarised in Table 2 below. We can map the four test stages to traditional test stages as follows:

· Low level - maps to a unit test stage.

· Application - maps to either a unit test or functional system test stage.

· Integration - maps to a functional system test stage.

· Non-functional - maps to non-functional system test stage.

The mappings described above are approximate. Clearly there are occasions when some ‘GUI integration testing’ can be performed as part of a unit test. The test types in ‘GUI application testing’ are equally suitable in unit or system testing. In applying the proposed GUI test types, the objective of each test stage, the capabilities of developers and testers, the availability of test environment and tools all need to be taken into consideration before deciding whether and where each GUI test type is implemented in your test process.

The GUI test types alone do not constitute a complete set of tests to be applied to a system. We have not included any code-based or structural testing, nor have we considered the need to conduct other integration tests or non-functional tests of performance, reliability and so on. Your test strategy should address all these issues.

Table 2: The four stages of GUI testing and their test types

Stage            Test Types
Low Level        Checklist testing, Navigation testing
Application      Equivalence partitioning, Boundary values, Decision tables, State transition testing
Integration      Desktop integration, C/S communications, Synchronisation
Non-Functional   Soak testing, Compatibility testing, Platform/environment

12.5 Types of GUI Test

12.5.1 Checklist Testing

Checklists are a straightforward way of documenting simple re-usable tests. The types of checks that are best documented in this way are listed below; a sketch of executing such a checklist follows the list.

· Programming/GUI standards covering standard features such as: window size, positioning, type (modal/non-modal), standard system commands/buttons (close, minimise, maximise etc.)

· Application standards or conventions such as: standard OK, Cancel and Continue buttons; consistent appearance, colour, size and location; consistent use of buttons or controls; object/field labelling using standard/consistent text.
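A minimal sketch of executing such a checklist, with the window described by a plain dict for illustration; a real project would pull these properties from a GUI automation tool instead:

```python
# Hypothetical description of the window under test.
window = {
    "modal": False,
    "has_close_button": True,
    "ok_button_label": "OK",
    "cancel_button_label": "Cancel",
}

# The re-usable checklist: a name plus a predicate over the window.
CHECKLIST = [
    ("window is non-modal",   lambda w: w["modal"] is False),
    ("close button present",  lambda w: w["has_close_button"]),
    ("standard OK label",     lambda w: w["ok_button_label"] == "OK"),
    ("standard Cancel label", lambda w: w["cancel_button_label"] == "Cancel"),
]

for name, check in CHECKLIST:
    print(("PASS" if check(window) else "FAIL") + f": {name}")
```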

12.5.2 Navigation Testing

In the context of a GUI, we can view navigation tests as a form of integration testing. To conduct meaningful navigation tests, the following are required to be in place (a sketch of a simple navigation test follows the list):

· An application backbone with at least the required menu options and call mechanisms to call the window under test.

· Windows that can invoke the window under test.

· Windows that are called by the window under test.
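A minimal sketch of a navigation test over a hypothetical menu graph: windows are nodes, “can invoke” relations are edges, and the tests assert that the window under test is reachable from the main menu and that every window it calls actually exists:

```python
# Hypothetical navigation map: window -> windows it can invoke.
NAV = {
    "main_menu":        ["customer_search"],
    "customer_search":  ["customer_details"],
    "customer_details": ["edit_address", "main_menu"],
    "edit_address":     ["customer_details"],
}

def reachable(start):
    """All windows reachable from `start` by following invocations."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(NAV.get(node, []))
    return seen

def test_window_under_test_is_reachable():
    assert "customer_details" in reachable("main_menu")

def test_all_called_windows_exist():
    for window, targets in NAV.items():
        for target in targets:
            assert target in NAV, f"{window} invokes undefined {target}"
```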
