Software Testing Guide Part I

1. Purpose :

The purpose of this document is to define the methodology for software testing and to guide and facilitate the use of that methodology in testing software products. The focus is on reducing software Quality Control costs by setting up streamlined processes.

2. Scope :

This document provides guidelines for the testing processes. All testing projects executed under any software development company should follow these guidelines.

3. Audience :

The target audience of this document comprises Project Managers, Project Leaders, Test Personnel, and anyone joining this activity with a basic understanding of Software Engineering.

4. Organization Of The Manual:

This guideline covers various aspects of the testing process. It covers the topics given below:

Section 1 – Introduction to S/W Testing: a brief description of S/W testing and its background.

Section 2 – Testing principles and myths: describes various testing principles and the misconceptions that underlie them.

Section 3 – Testing Philosophies: describes the general steps to be followed in the process.

Section 4 – Automated S/W testing: covers the principles of automating S/W testing and the use of various testing tools.

Section 5 – Code Coverage: deals with the use of code coverage methods in testing.

Section 6 – System testing: gives an overall idea of system testing, including its various methods.

Section 7 – Testing GUI applications: gives guidelines on GUI testing.

Section 8 – Client/Server testing: gives a brief description of the principles of client/server testing.

Section 9 – Web Testing: gives ideas for testing web-based applications.

Section 10 – Guidelines to prepare a test plan: covers the different sections of a test plan and their contents in general.

Section 11 – Guidelines for test specification: gives guidelines for preparing test specifications.

Section 12 – Appendix: gives a list of testing tools and a sample system test plan.

5. Abbreviations :

GUI – Graphical User Interface

SUT – Software Under Test

DUT – Device Under Test

C/S – Client/Server

6. Introduction to Software Testing

Testing is a process to detect differences between the observed and the stated behaviour of software. Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that developers discard preconceived notions of the correctness of their own software.

Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects; it can only show that defects are present. In practice, testing is used more to confirm quality than to achieve it.

7. Testing Principles

Some of the Principles of Testing in different phases of testing are:

Testing Project Management:

· Testing is the process of executing a program with the intent of finding errors.

· Do not plan a testing effort under the assumption that no errors will be found.

Preparation of test cases:

· Study project priorities when deciding on testing activities; e.g. for an on-line system, pay more attention to response time, and test the frequently used features with particular care.

· A good test case is one that has a high probability of detecting an as-yet-undiscovered error.

· A successful test case is one that detects an as-yet-undiscovered error.

· A necessary part of a test case is a definition of the expected output or result.

· Test cases must be written for invalid and unexpected, as well as valid and expected, input conditions.

· Avoid throwaway test cases unless the program is truly a throwaway program.

Testing :

· A programmer should avoid attempting to test his or her own program.

· A programming organisation should not test its own programs.

· Thoroughly inspect the results of each test.

· Examining a program to see whether it does what it is supposed to do is only half of the battle. The other half is seeing whether the program does what it is NOT supposed to do.

· The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.

· Testing is an extremely creative and intellectually challenging task.

· Tools should be used for better control over testing and to improve productivity.

Software Testing Myths :

A test process that complements object-oriented design and programming can significantly increase reuse, quality, and productivity. Establishing such a process usually means dealing with some common misperceptions (myths) about testing software. This section explores these myths and their assumptions, and then explains why each myth is at odds with reality.

Myth 1: Testing is unnecessary. With iterative and incremental development we obviate the need for a separate test activity, which was really only necessary in the first place because conventional programming languages made it so easy to make mistakes.

Reality: Human error is as likely as ever, whatever the development approach or programming language.

Myth 2: Testing gets in the way. The idea of testing to find faults is fundamentally wrong – all we need to do is keep “improving” our good ideas. The simple act of expression is sufficient to create trustworthy classes. Besides, testing is a destructive, rote process – it isn't a good use of a developer's creative abilities and technical skills.

Reality: Testing can be a complementary, integral part of development, and (as noted under the testing principles above) designing good tests is itself creative, intellectually challenging work.

Myth 3: Testing is a structured/waterfall idea – it can't be consistent with incremental object-oriented development. Objects evolve; they aren't just designed, thrown over the wall for coding, and over another wall for testing. What's more, if you test each class in a system separately, then you have to do “big-bang” integration, which is an especially bad idea with object-oriented systems.

Reality: Testing can be incremental and iterative. The iterative and incremental nature of object-oriented development is indeed inconsistent with a simple, sequential test process (test each unit, then integration-test all of them, then do a system test), but that does not make testing irrelevant. The boundary that defines the scope of unit and integration testing is simply different for object-oriented development. Tests can be designed and exercised at many points in the process. Thus “design a little, code a little” becomes “design a little, code a little, test a little.”

Myth 4: Testing is trivial. Testing is simply poking around until you run out of time. All we need to do is start the app, try each use-case, and try some garbage input. Testing is neither serious nor challenging work – hasn't most of it already been automated?

Reality: Hunches about testing completeness are notoriously optimistic. Adequate testing requires a sophisticated understanding of the system under test: you need to be able to develop an abstract view of its control flow, data flow, and state space from a formal model of the system requirements, and you need to be able to define the expected results for any input and state you select as a test case. This is interesting work for which little automation is available.

Myth 5: Automated GUI testing is sufficient. If a system is automatically exercised by trying permutations of GUI commands supplied by a command playback tool, the underlying application objects will be sufficiently tested.

Reality: GUI-based tests may be little more than automated testing-by-poking-around. While there are many useful capture/playback products to choose from, the number of hours a script runs has no direct or necessary correlation with how thoroughly the system under test has been exercised. It is quite possible to retest the same application logic over and over, resulting in inflated confidence. Further, GUI test tools are typically of little use for objects in embedded systems.

Myth 6: If programmers were more careful, testing would be unnecessary. Extra effort, extra pressure, or extra incentive can eliminate programming errors. Bugs are simply an indication of poor work habits. These poor work habits could be avoided if we’d use a better management strategy.

Reality: Many bugs only surface during integration. There are many interactions among components that cannot be easily foreseen until all or most components of a system are integrated and exercised. So, even if we could eliminate all individual sources of error, integration errors are highly likely. Static methods cannot reveal interaction errors with the target or transient performance problems in hard real-time systems.

Myth 7: Testing is inconsistent with a commitment to quality. Testing assumes faults have escaped the design and programming process. This assumption is really just an excuse for sloppy development. All bugs are due to errors that could be avoided if different developer behaviour could be induced. This perception is often a restatement of the preceding sloppy-programmer myth.

Reality: Reliable software cannot be obtained without testing. Testing activities can begin and proceed in parallel with concept definition, OOA, OOD, and programming. When testing is correctly interleaved with development, it adds considerable value to the entire development process. The necessity of testing is not an indictment of anything more than the difficulty of building large systems.

Myth 8: Testing is too expensive – we don't have time. To test beyond testing-by-poking-around takes too much time and costs too much. Test tools are an unnecessary luxury, since all we need are a few good pokes. Besides, projects always slip – testing time gets squeezed anyway.

Reality: Pay me now, or pay me much more later. The cost of finding and correcting errors is always higher as the time between fault injection and detection increases. The lowest cost results when you prevent errors. If a fault goes unnoticed, it can easily take hours or days of debugging to diagnose, locate, and correct after the component is in widespread use.

Myth 9: Testing is the same (as it is with conventional software). The only kind of testing that matters is “black-box” system testing, where we define the externally observable behaviour to be produced from a given input. We don’t need to use any information about the implementation to select these tests and their test inputs.

Reality: OO code structure matters. Effective testing is guided by information about likely sources of error. The combination of polymorphism, inheritance, and encapsulation is unique to object-oriented languages, presenting opportunities for error that do not exist in conventional languages. Our testing strategy should help us look for these new kinds of errors and offer criteria to help decide when we have done enough looking. Since the “fundamental paradigm shift” often touted for object-oriented development has led to some new points of view and representations, our techniques for extracting test cases from these representations must also change.

8. Testing Philosophies

The following steps should be followed for a testing project. The Project Manager shall start preparation of the test plan – identifying the requirements, planning the test set-up, and dividing the various tasks among the product testing team. A sample System Test Plan is given in Appendix 2.

1. Identify the requirements for training of the team members and draw up a plan to impart it. Get the plan sanctioned by the Group Head. If the training can be arranged with internal resources, the Project Co-ordinator can arrange it and forward the training details to the Head – Training Department for record purposes. If the training has to be arranged with external resources, inform the Head – Training Department for implementation.

2. Obtain the test cases and checklists. If the customer has not given any test cases, the team members will go through the user manual and other relevant documents and develop the test plan and test cases, along with the checklists to be used. The project co-ordinator will get the test plan and test cases reviewed using peer review or any other suitable method. The Team Leader/Group Head should approve the test plan and test cases.

3. The test cases obtained from the client shall be treated as the originals. The testers should work on a copy of their allocated test cases and maintain a master copy; similarly, they should work on copies of the checklists to be used. Review the checklists and test cases for completeness. If required, test cases may be added or modified; these modifications must also be reviewed and approved.

4. Install the software and verify proper installation. If installation fails, inform the customer and despatch the product testing report. If required, return other materials such as the original CDs of the product. Wait for the next build from the customer.

5. If the installation is proper, prepare the database for storing defects and apply the test cases. Execute the testing as planned: first test the basic features and functions of the software, then go on to integration testing, simulation and stress testing.

The testers should maintain the following records in everyday testing:

· Actual test records on the test checklist (recording of pass/fail against the test cases)

· Detailed Defect Report (a minimal sketch of both records follows this list)
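Here is one way these two records might be kept – a minimal sketch assuming a simple CSV layout. The field names and sample values are illustrative, not prescribed by this guideline:

```python
import csv
import os
from datetime import date

# Illustrative field names -- adapt them to the formats the client prescribes.
CHECKLIST_FIELDS = ["date", "test_case_id", "tester", "result"]
DEFECT_FIELDS = ["defect_id", "test_case_id", "severity", "description", "status"]

def append_row(path, fields, row):
    """Append one record, writing a header line first if the file is new."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(fields)
        writer.writerow(row)

# Pass/fail recorded against a test case on the daily checklist:
append_row("checklist.csv", CHECKLIST_FIELDS,
           [date.today().isoformat(), "TC-042", "tester1", "Fail"])
# Matching entry in the detailed defect report; new defects start as Open:
append_row("defects.csv", DEFECT_FIELDS,
           ["D-017", "TC-042", "Major", "Save dialog crashes on empty filename", "Open"])
```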

6. The Project Co-ordinators should maintain the Weekly Project Status Report (to be sent to Client representative for status updating).

7. If a project has particular records to maintain as per the client's requirements, they should be reconciled with the records suggested here.

8. Consolidate the defects found at each stage of testing and prepare a defect summary report along with the Product Testing report stating the extent to which the product meets its stated specifications.

9. At the end of the project, close the product testing, get the report signed by the Group Head and send it to the customer.

9. Automated Software Testing

Automated Software Testing, as conducted with today's automated test tools, is a development activity that includes programming responsibilities similar to those of the SUT developer. Automated test tools generate code, comprising test scripts, while exercising a user interface. This code can be modified and reused to serve as automated test scripts for other applications in less time.

Software testing tools make the job easier. Many standard tools are now available in the market, ready to serve the purpose. Alternatively, one can develop a customised test tool to serve a particular requirement, or use a standard tool and then 'instruct' it to get the desired service.

Usage of tools should be decided at the planning stage of the project, and testing activity should start from the Requirement Specification stage. Though an early start of testing is recommended for any project, irrespective of the usage of tools, it is effectively a must for projects that use tools. The reason lies in the preparation of test cases: test cases for System Testing should be prepared once the SRS is baselined, and then modified as the design spec, the code and, of course, the actual software evolve.

This process also requires a well-defined test plan, which should be prepared right after the project plan. Most of the standard tools work with the 'capture/playback' method. While using such a tool, the tester first runs the tool, starts its capture facility and then runs the SUT according to a test case. With its capture facility, the tool records all the steps performed by the user, including keystrokes, mouse activity and selected output. The user can then play back the recorded steps, automatically driving the application and validating the results by comparing them to the previously saved baseline.
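To make the capture/playback idea concrete, here is a minimal sketch of what a recorded-and-then-edited test script might look like, using the open-source pyautogui library as a stand-in for a commercial tool's playback engine. The screen coordinates, the baseline image file and the login scenario are all assumptions for illustration:

```python
import pyautogui  # pip install pyautogui; stands in for a commercial playback engine

def run_login_test():
    """Replay the recorded steps for one test case and validate against a baseline."""
    pyautogui.click(120, 85)            # recorded: click the 'User name' field
    pyautogui.typewrite("testuser")     # recorded: keystrokes
    pyautogui.click(120, 115)           # recorded: click the 'Password' field
    pyautogui.typewrite("secret")
    pyautogui.click(160, 150)           # recorded: click the 'Login' button
    # Validate: look for the previously saved baseline image on screen.
    try:
        match = pyautogui.locateOnScreen("baseline_welcome.png")
    except pyautogui.ImageNotFoundException:  # newer versions raise instead of returning None
        match = None
    return match is not None

if __name__ == "__main__":
    print("PASS" if run_login_test() else "FAIL")
```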

When the tool records the steps, it automatically generates a piece of code known as a test script. The tester can modify this code to make the script perform some desired activity, or write his or her own test scripts. A number of test cases can be combined into a test suite, and the tester can schedule test suites to run at night or in any off time, unattended. The result log is generated automatically and stored for review.
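A minimal sketch of combining test cases into a suite that runs unattended and leaves a result log for review, using Python's standard unittest module; the login function is a stand-in for driving the real SUT:

```python
import unittest

def login(user, password):
    """Stand-in for the SUT; a real suite would drive the application under test."""
    return user == "testuser" and password == "secret"

class LoginTests(unittest.TestCase):
    def test_valid_credentials(self):
        self.assertTrue(login("testuser", "secret"))

    def test_rejects_bad_password(self):
        self.assertFalse(login("testuser", "wrong"))

if __name__ == "__main__":
    # Write the result log to a file so an unattended (e.g. overnight) run
    # can be reviewed the next morning.
    with open("results.log", "w") as log:
        suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginTests)
        unittest.TextTestRunner(stream=log, verbosity=2).run(suite)
```

Scheduling such a script with cron or the Windows Task Scheduler gives the unattended night run described above.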

Here is the basic paradigm for GUI-based automated regression testing:

a. Design a test case, then run it.

b. If the program fails the test, write a bug report. Restart after the bug is fixed.

c. If the program passes the test, automate it. Run the test again (either from a script or with the aid of a capture utility). Capture the screen output at the end of the test. Save the test case and the output.

d. Next time, run the test case and compare its output to the saved output. If the outputs match, the program passes the test.
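A minimal sketch of steps (c) and (d) above, assuming for simplicity that the SUT is a command-line program whose captured output is kept as a baseline file (GUI tools apply the same idea to captured screens):

```python
import subprocess
from pathlib import Path

def run_regression_test(command, baseline_path):
    """Run the SUT and compare its output to the saved baseline (step d)."""
    output = subprocess.run(command, capture_output=True, text=True).stdout
    baseline = Path(baseline_path)
    if not baseline.exists():
        baseline.write_text(output)      # first run: capture and save (step c)
        return "BASELINE SAVED"
    return "PASS" if output == baseline.read_text() else "FAIL"

# Hypothetical SUT invocation.
print(run_regression_test(["myapp", "--report"], "baseline_report.txt"))
```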

Benefits of Automated Testing

The use of automated testing can improve all areas of testing, including test procedure development, test execution and test results analysis. It also supports all test phases, including unit, integration, regression, system, acceptance, performance and stress testing.

Some of the Tools

Various types of test tools are available for use throughout the life-cycle phases, supporting the automated testing process. A sample list of software tools, mapped to life-cycle phases, is given in Appendix 1.

Problems with Automated Testing

There are some popular myths:

· people can use these tools to quickly create extensive test suites;

· the tools are easy to use, and maintenance of the test suites is not a problem; and

· a manager can save money and time, and ship software sooner, by using one of these tools to replace human testers.

In reality there are many pitfalls in automated testing, and its introduction has to be properly planned.

A few known pitfalls are:

a. It is neither cheap nor quick. It usually takes 3 to 10 times as long to create, verify and minimally document [1] an automated test as it does to run the test manually. Many tests will be worth automating, but for tests that are run only once or twice it is not worthwhile.

b. These tests are not especially powerful: a script checks only what it was told to check, and repeats the same checks on every run.

c. In practice, many test groups automate only the easy-to-run tests.

d. The slightest change in the UI can make the scripts invalid.

Suggested strategies for success:

a. Reset management expectations about the timing of benefits from automation.

b. Recognize that test automation development is software development: automation of software testing is just like all the other automation efforts that software developers engage in – except that this time, the testers are writing the automation code. Within an application dedicated to testing a program, every test case is a feature, and every aspect of the underlying application is data.

c. Use a data-driven architecture (a sketch follows this list).

d. Use a framework-based architecture.

e. Recognize staffing realities.

f. Consider using other types of automation.
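A minimal sketch of the data-driven architecture named in strategy (c): the test inputs and expected results live in a data table maintained by testers, and one generic script exercises them all. The table contents and the add function are illustrative stand-ins:

```python
import csv
import io

# In practice this table would be an external CSV file maintained by testers.
TEST_DATA = """test_id,a,b,expected_sum
T1,2,3,5
T2,-1,1,0
T3,0,0,0
"""

def add(a, b):
    """Stand-in for the feature under test."""
    return a + b

def run_data_driven_tests():
    """Drive the SUT once per data row; report pass/fail per test id."""
    failures = 0
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = add(int(row["a"]), int(row["b"]))
        status = "PASS" if actual == int(row["expected_sum"]) else "FAIL"
        if status == "FAIL":
            failures += 1
        print(row["test_id"], status)
    return failures

if __name__ == "__main__":
    raise SystemExit(run_data_driven_tests())
```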

10. Code Coverage

A perfectly effective test suite would find every bug. Since we don’t know how many bugs there are, we can’t measure how closely test suites approach perfection. Consequently, we use an index as an approximate measure of test suite quality: since we can’t measure what we want, we measure something related.

With coverage, we estimate test suite quality by examining how thoroughly the tests exercise the code:

· Is every if statement taken in both the true and false directions?

· Is every case taken? What about the default case?

· Is every while loop executed more than once? Does some test force the while loop to be skipped?

· Is every loop executed exactly once?

· Do the tests probe for off-by-one errors?
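As an illustration, consider a small (hypothetical) function and a set of tests that answers each of the questions above:

```python
def first_positive(values, limit):
    """Return the first positive element among the first `limit` values, or None."""
    i = 0
    while i < limit:            # tests should skip this loop, run it once,
        if values[i] > 0:       # and run it many times; the if must be
            return values[i]    # taken in both directions
        i += 1
    return None

# Tests probing the checklist above:
assert first_positive([], 0) is None          # while loop skipped entirely
assert first_positive([5], 1) == 5            # loop body executed exactly once, if true
assert first_positive([-1, -2, 3], 3) == 3    # loop repeated; if false, then true
assert first_positive([-1, -2], 2) is None    # loop exhausted without a positive
assert first_positive([-1, 7], 1) is None     # off-by-one probe: 7 sits just beyond limit
```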

The main technique for demonstrating that the testing has been thorough is called test coverage analysis. Simply stated, the idea is to create, in some systematic fashion, a large and comprehensive list of tasks and check that each task is covered in the testing phase. Coverage can help in monitoring the quality of testing, assist in creating tests for areas that have not been tested before, and help with forming small yet comprehensive regression suites.

Coverage, in general, can be divided into two types: code-based and functional. Code-based coverage concentrates on measuring syntactic properties of the execution, for example that each statement was executed or each branch was taken. This makes code-based coverage a generic method which is usually easy to measure, and for which many tools are available; examples include code-based coverage tools for C, C++ and Java. Functional coverage, on the other hand, focuses on the functionality of the program, and is used to check that every aspect of the functionality is tested. Functional coverage is therefore design- and implementation-specific, and is more costly to measure.
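Code-based coverage is measured by off-the-shelf tools, but functional coverage usually has to be instrumented by hand. Here is a minimal sketch of a hand-rolled functional coverage tracker; the task list is an assumed example of what a design might consider important:

```python
# Functional coverage: enumerate design-specific tasks up front,
# mark them as tests touch them, and report what was never exercised.
FUNCTIONAL_TASKS = {
    ("login", "valid_user"): False,
    ("login", "locked_account"): False,
    ("transfer", "insufficient_funds"): False,
    ("transfer", "daily_limit_reached"): False,
}

def cover(feature, scenario):
    """Called from test code (or instrumented SUT code) when a task is exercised."""
    FUNCTIONAL_TASKS[(feature, scenario)] = True

def report():
    missed = [t for t, hit in FUNCTIONAL_TASKS.items() if not hit]
    done = len(FUNCTIONAL_TASKS) - len(missed)
    print(f"functional coverage: {done}/{len(FUNCTIONAL_TASKS)} tasks")
    for feature, scenario in missed:
        print(f"  MISSED: {feature}/{scenario}")

# During testing:
cover("login", "valid_user")
cover("transfer", "insufficient_funds")
report()   # flags locked_account and daily_limit_reached as untested
```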

A few general code coverage categories are:

· Control-flow Coverage

· Block Coverage

· Data-flow Coverage

A few other types and their alternate names are listed in the table below.

Table: Types of verification coverage

Coverage type          Alternate names
Statement execution    Line, statement, block, basic block, segment
Decision               Branch, all edges
Expression             Condition, condition-decision, all edges, multiple condition
Path                   Predicate, basis path
Event                  (None)
Toggle                 (None)
Variable               (None)
State machine          State value, state transition, state scoring, variable transition, FSM

Code-based coverage, usually just called coverage, is a technique that measures the execution of tests against the source code of the program. For example, one can measure whether all the statements of the program have been executed. The main uses of code-based coverage are assessing the quality of the testing, finding requirements missing from the test plan, and constructing regression suites.

A number of standards, as well as internal company policies, require the testing program to achieve some level of coverage, under some model. For example, one of the requirements of the ABC standard is 100% statement coverage.

Nowadays plenty of software tools are available. Almost all coverage tools implement the statement and branch coverage models. Many tools also implement multi-condition coverage, a model that checks that each part of a compound condition (e.g. A or (B and C)) has an impact on the outcome. Fewer tools implement the more complex models such as define-use, mutation, and the path coverage variants.
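To see what the stronger condition models demand, take a compound condition of the form A or (B and C). Branch coverage only needs the decision to evaluate true once and false once; multi-condition models ask about the combinations of the individual sub-conditions. A sketch that enumerates them:

```python
from itertools import product

def decision(a, b, c):
    return a or (b and c)   # the compound condition under test

# Multiple-condition coverage asks for tests over the combinations of the
# individual condition values, not merely both outcomes of the decision.
for a, b, c in product([False, True], repeat=3):
    print(a, b, c, "->", decision(a, b, c))
```

With coverage.py, for instance, branch coverage is measured by running `coverage run --branch`; checking that each sub-condition had an impact needs a tool that implements one of the stronger models.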

The main advantage of code-based coverage tools is their simplicity of use. The tools come ready for the testing environment, no special preparation is needed in the programs, and the feedback from the tool is straightforward to understand. The main disadvantage is that the tools do not "understand" the application domain, so it is very hard to tune them towards the areas the user considers significant. This shortcoming can be worked around with simple scripting languages like VB, Perl or Tcl/Tk (see the sketch below).
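A minimal sketch of the kind of post-processing meant here, written in Python rather than VB/Perl/Tcl. It assumes the coverage tool can dump its per-file report as plain text, and filters that report down to the modules the team considers significant; the file name and module prefixes are illustrative:

```python
# Filter a textual coverage report down to the application areas we care about,
# since the coverage tool itself does not know the application domain.
SIGNIFICANT = ("billing/", "payments/")   # illustrative module prefixes

with open("coverage_report.txt") as report:
    for line in report:
        if line.startswith(SIGNIFICANT):
            print(line.rstrip())
```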

10.1 General Guidelines for Usage of Coverage


Coverage should not be used if the resources it consumes can be better spent elsewhere. This is the case when the budget is very tight and there is not enough time even to finish the test plan; in such a case, designing new tests is not useful, as not even all the existing tests will be run. Coverage should be used only if there is a full commitment to make use of the data collected: measuring coverage merely in order to report a coverage percentage is practically worthless. Coverage points out the parts of the application that have not been tested and guides test generation towards those parts. Moreover, it is very important to try to reach full coverage, or at least to set high coverage goals, since many bugs hide in hard-to-reach places.

Coverage is a very useful criterion for test selection for regression suites. Whenever a small set of tests is needed, the suite should be selected so that it covers as many requirements or coverage tasks as possible (a greedy selection is sketched below).
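A minimal sketch of such a selection, using a greedy set-cover heuristic over a hypothetical mapping from tests to the coverage tasks they exercise:

```python
TESTS = {   # hypothetical mapping: test -> coverage tasks it exercises
    "T1": {"stmt1", "stmt2", "stmt3"},
    "T2": {"stmt3", "stmt4"},
    "T3": {"stmt4", "stmt5", "stmt6"},
    "T4": {"stmt1", "stmt6"},
}

def select_regression_suite(tests):
    """Greedy set cover: repeatedly pick the test hitting the most uncovered tasks."""
    uncovered = set().union(*tests.values())
    suite = []
    while uncovered:
        best = max(tests, key=lambda t: len(tests[t] & uncovered))
        suite.append(best)
        uncovered -= tests[best]
    return suite

print(select_regression_suite(TESTS))   # -> ['T1', 'T3']: two tests cover all six tasks
```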

When coverage and reviews are used on the same project, reviews can put less emphasis on things that coverage is likely to find. For example, a review for dead code is unnecessary if statement coverage is used, and manually checking that certain values of a variable can be attained is not needed if the appropriate functional coverage model is used.

Coverage should not be used to judge if the “desirable” features are implemented.

11. System Testing

A system is the biggest component of all: the complete, integrated product. System testing is aimed at revealing bugs that cannot be attributed to individual components as such, but rather to inconsistencies between components or to the planned interactions of components. It deals with issues and behaviour that can only be exposed by testing the entire, integrated system, e.g. performance and security.

11.1 Objective

The purpose of system testing is to show that the product is inconsistent with its original objectives. System testing is oriented toward a distinct class of errors and is measured with respect to a distinct type of documentation in the development process. It may well partially overlap in time with other testing processes. Care must be taken that no component or class of error is missed, as this is the last phase of testing.
