Software Testing Guide Part III

Obviously, if any of the above components are not available, stubs and/or drivers will be necessary to implement navigation tests. If we assume all required components are available, what tests should we implement? We can split the task into steps:

· For every window, identify all the legitimate calls to the window that the application should allow and create test cases for each call.

· Identify all the legitimate calls from the window to other features that the application should allow and create test cases for each call.

· Identify reversible calls, i.e. those where closing the called window should return control to the ‘calling’ window, and create a test case for each.

· Identify irreversible calls, i.e. those where the calling window closes before the called window appears, and create a test case for each.

There may be multiple ways of executing a call to another window, e.g. menus, buttons, keyboard commands. In this circumstance, consider creating one test case for each valid path by each available means of navigation. Note that navigation tests reflect only a part of the full integration testing that should be undertaken. These tests constitute the ‘visible’ integration testing of the GUI components that a ‘black box’ tester should undertake.
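For illustration, the derivation steps above can be captured in a small script that expands a window/call model into test case titles. This is a minimal sketch, not part of the original method; every window name, means of navigation and the model structure itself are hypothetical:

```python
# Sketch: enumerating navigation test cases from a window/call model.
# All windows, calls and means of navigation named here are hypothetical.

# calling window -> list of (called window, means of navigation, reversible?)
NAVIGATION_MODEL = {
    "MainWindow": [
        ("SearchWindow", "menu", True),
        ("SearchWindow", "toolbar button", True),
        ("AboutDialog", "menu", True),
    ],
    "SearchWindow": [
        ("ResultsWindow", "OK button", False),   # irreversible: caller closes first
        ("ResultsWindow", "Enter key", False),
    ],
}

def derive_navigation_tests(model):
    """Yield one test case per (caller, callee, means) path, plus a
    'close returns to caller' case for each reversible call."""
    for caller, calls in model.items():
        for callee, means, reversible in calls:
            yield f"Open {callee} from {caller} via {means}"
            if reversible:
                yield f"Close {callee} and verify control returns to {caller}"

if __name__ == "__main__":
    for case in derive_navigation_tests(NAVIGATION_MODEL):
        print(case)
```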

12.5.3 Application Testing

Application testing is the testing that would normally be undertaken on a forms-based application. It focuses very much on the behaviour of the objects within windows. Guidelines for applying the standard test design techniques to GUI windows are presented in the table below:

Equivalence Partitions and Boundary Value Analysis

· Input validation

· Simple rule-based processing

Decision Tables

· Complex logic or rule-based processing

State-Transition Testing

· Applications with modes or states where processing behaviour is affected

· Windows where there are dependencies between objects in the window
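As a worked example of the first row, here is a minimal sketch of equivalence partitioning and boundary value analysis applied to input validation. The field and its 1–999 rule are invented for illustration:

```python
# Sketch: boundary value analysis for a hypothetical quantity field
# whose validation rule is 'accept whole numbers from 1 to 999'.
import unittest

def validate_quantity(value):
    """Hypothetical input-validation rule under test."""
    return isinstance(value, int) and 1 <= value <= 999

class QuantityBoundaryTests(unittest.TestCase):
    def test_partitions_and_boundaries(self):
        cases = [
            (0, False),     # just below the lower boundary
            (1, True),      # lower boundary
            (500, True),    # representative of the valid partition
            (999, True),    # upper boundary
            (1000, False),  # just above the upper boundary
        ]
        for value, expected in cases:
            self.assertEqual(validate_quantity(value), expected, msg=value)

if __name__ == "__main__":
    unittest.main()
```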

12.5.4 Desktop Integration Testing

Client/server systems assume a ‘component-based’ architecture, so they often treat other products on the desktop, such as a word processor, spreadsheet, electronic mail or Internet-based application, as components, and make use of these products’ features by calling them directly or through specialist middleware.

We define desktop integration as the integration and testing of a client application with these other components.

The tester needs to know what interfaces exist, what mechanisms these interfaces use, and how each interface can be exercised through the application user interface.

To derive a list of test cases, the tester needs to ask a series of questions for each known interface (a short sketch follows the list):

· Is there a dialogue between the application and interfacing product (i.e. a sequence of stages with different message types to test individually) or is it a direct call made once only?

· Is information passed in both directions across the interface?

· Is the call to the interfacing product context sensitive?

· Are there different message types? If so, how can these be varied?
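One way to keep the answers testable is to record them per interface and expand them mechanically into test cases. The sketch below is illustrative only; the interfaces, flags and message types are invented:

```python
# Sketch: answers to the four interface questions, expanded into test cases.
# The interfaces and message types below are hypothetical examples.
INTERFACES = [
    {"name": "word processor (mail merge)", "dialogue": True,
     "two_way": True, "context_sensitive": False,
     "message_types": ["open document", "insert field", "print"]},
    {"name": "e-mail client (send report)", "dialogue": False,
     "two_way": False, "context_sensitive": True,
     "message_types": ["send"]},
]

def derive_interface_tests(interface):
    name = interface["name"]
    for message in interface["message_types"]:
        yield f"{name}: exercise the '{message}' message type"
    if interface["dialogue"]:
        yield f"{name}: step through the full dialogue sequence"
    if interface["two_way"]:
        yield f"{name}: verify data passed back across the interface"
    if interface["context_sensitive"]:
        yield f"{name}: repeat the call in each relevant context"

for spec in INTERFACES:
    for case in derive_interface_tests(spec):
        print(case)
```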

12.5.5 Synchronisation Testing

There may be circumstances in the application under test where there are dependencies between different features. Examples of synchronisation are when:

· The application has different modes: if a particular window is open, certain menu options become available (or unavailable).

· Data in the database changes, and the changes are notified to the application by an unsolicited event that updates displayed windows.

· Data on a visible window is changed, making data on another displayed window inconsistent.

In some circumstances, there may be reciprocity between windows. For example, changes on window A trigger changes in window B and the reverse effect also applies (changes in window B trigger changes on window A).

In the case of displayed data, there may be other windows that display the same or similar data and that either cannot be displayed simultaneously or should not change for some reason. These situations should also be considered. To derive synchronisation test cases (a small sketch follows the list):

· Prepare one test case for every window object affected by a change or unsolicited event and one test case for reciprocal situations.

· Prepare one test case for every window object that must not be affected - but might be.
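The two rules above can be demonstrated with a toy observer model: one window that must update, a reciprocal window that must also update, and one window that must not be affected. Everything in this sketch is invented for illustration:

```python
# Sketch: a toy model of unsolicited-event synchronisation between windows.

class Window:
    def __init__(self, name):
        self.name = name
        self.displayed_value = None

    def notify(self, value):               # unsolicited update event
        self.displayed_value = value

class Database:
    def __init__(self, observers):
        self.observers = observers

    def update(self, value):
        for window in self.observers:      # push the change to subscribers
            window.notify(value)

win_a, win_b, win_c = Window("A"), Window("B"), Window("C")
db = Database(observers=[win_a, win_b])    # C deliberately not subscribed

db.update(42)
assert win_a.displayed_value == 42         # affected window must update
assert win_b.displayed_value == 42         # reciprocal window must update
assert win_c.displayed_value is None       # must-not-be-affected window unchanged
print("synchronisation checks passed")
```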

12.6 Non-functional Testing of GUI

The tests described in the previous sections are functional tests. These tests are adequate for demonstrating that the software meets its requirements and does not fail. However, GUI applications also have non-functional modes of failure. We propose three additional GUI test types (which are likely to be automated).

12.6.1 Soak Testing

Soak tests exercise system transactions continuously for an extended period in order to flush out memory-leak problems.

These tests are normally conducted using an automated tool.

Selected transactions are repeatedly executed, and machine resources on the client (or the server) are monitored to identify resources that are allocated but not returned by the application code.
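A soak test driver can be sketched in a few lines. This example assumes the third-party psutil package for resource monitoring and a hypothetical run_transaction() hook into the application under test; it watches the driver’s own process, whereas a real test would monitor the client or server process:

```python
# Sketch of a soak test loop. Assumes: pip install psutil.
import psutil

def run_transaction():
    """Hypothetical hook that drives one application transaction."""
    pass

def soak_test(iterations=10_000, sample_every=500):
    process = psutil.Process()                 # the process being monitored
    baseline = process.memory_info().rss
    for i in range(1, iterations + 1):
        run_transaction()
        if i % sample_every == 0:
            delta = process.memory_info().rss - baseline
            # A steadily growing delta suggests resources being
            # allocated but never returned by the application code.
            print(f"iteration {i}: resident memory {delta:+,} bytes vs baseline")

if __name__ == "__main__":
    soak_test()
```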

12.6.2 Compatibility Testing

Compatibility tests are (usually) automated tests that aim to demonstrate that resources shared with other desktop products are not locked unnecessarily, causing the system under test or the other products to fail.

These tests normally execute a selected set of transactions in the system under test, then switch to exercising other desktop products in turn, repeating this cycle over an extended period.

12.6.3 Platform/Environment Testing

In some environments, the platform on which the developed GUI application is deployed may not be under the control of the developers. PC end-users may have a variety of hardware types such as 486 and Pentium machines, various video drivers, and Microsoft Windows 3.1, 95 or NT. An application may be designed to operate on a variety of platforms, so you may have to execute tests on these various configurations to ensure that, once the software is implemented, it continues to function as designed. In this circumstance, the testing requirement is for a repeatable regression test to be executed on a variety of platforms and configurations. Again, the requirement for automated support is clear, so we would normally use a tool to execute these tests on each of the platforms and configurations as required.
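A configuration matrix makes the scale of this requirement visible. The sketch below simply enumerates the combinations a regression pack would have to cover; the configuration values echo the examples above and are illustrative only:

```python
# Sketch: enumerating platform/configuration combinations for the
# repeatable regression test. Values are illustrative.
from itertools import product

operating_systems = ["Windows 3.1", "Windows 95", "Windows NT"]
video_drivers = ["VGA", "SVGA"]
hardware = ["486", "Pentium"]

for os_name, driver, cpu in product(operating_systems, video_drivers, hardware):
    # In practice a test tool would dispatch the regression pack to a
    # machine matching this configuration.
    print(f"Run regression pack on {cpu} / {os_name} / {driver}")
```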

12.7 Automating GUI Tests

12.7.1 Justifying Automation

Automating test execution is normally justified by the need to conduct functional regression tests. In organisations currently performing regression tests manually, this case is easy to make - the tool will save testers time. However, most organisations do not conduct formal regression tests, and often compensate for this ‘sub-consciously’ by starting to test late in the project or by executing tests in which there is a large amount of duplication.

In this situation, buying a tool to perform regression tests will not save time, because no time is being spent on regression testing in the first place. In organisations where development follows a RAD approach or where development is chaotic, regression testing is difficult to implement at all - software products may never be stable enough for a regression test to mature and be of value. Usually, the cost of developing and maintaining automated tests exceeds the value of finding regression errors.

We propose that by adopting a systematic approach to testing GUIs and using tools selectively for specific types of tests, tools can be used to find errors during the early test stages. That is, we can use tools to find errors pro-actively rather than repeating tests that didn’t find bugs first time round to search for regression errors late in a project.

12.7.2 Automating GUI Tests

Throughout the discussion of the various test types in the previous chapter, we have assumed that by designing tests with specific goals in mind, we will be in a better position to make successful choices on whether we automate tests or continue to execute them manually. Based on our experience of preparing automated tests and helping client organisations to implement GUI test running tools we offer some general recommendations concerning GUI test automation below.

Pareto law

· We expect 80% of the benefit to derive from the automation of 20% of the tests.

· Don’t waste time scripting low volume complex scripts at the expense of high volume simple ones.

Hybrid Approach

· Consider using the tools to perform navigation and data entry prior to manual test execution.

· Consider using the tool for test running, but perform comparisons manually or ‘off-line’.

Coded Scripts

· These work best for navigation and checklist-type scripts (see the sketch after this table).

· Use where loops and case statements in code leverage simple scripts.

· Are relatively easy to maintain as regression tests.

Recorded Scripts

· Need to be customised to make them repeatable.

· Are sensitive to changes in the user interface.

Test Integration

· Automated scripts need to be integrated into some form of test harness.

· Proprietary test harnesses are usually crude, so custom-built harnesses are required.

Migrating Manual Test Scripts

· Manual scripts document automated scripts.

· Delay migration of manual scripts until the software is stable, then reuse them for regression tests.

Non-Functional Tests

· Any script can be reused for soak tests, but it must exercise the functionality of concern.

· Tests of interfaces to desktop products and server processes are high on the list of tests to automate.

· Instrument these scripts to take response time measurements and re-use them for performance testing.
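To illustrate the ‘coded scripts’ point, the sketch below shows how one loop in code leverages a simple checklist across many windows. The driver object and its methods are hypothetical stand-ins for whatever API your test running tool provides; a tiny fake driver is included only so the sketch runs:

```python
# Sketch: a coded checklist script. The driver API is hypothetical.

WINDOWS_TO_CHECK = ["Login", "Search", "Results", "Settings"]

def checklist(driver, window_name):
    """One simple checklist, applied to any window."""
    driver.open_window(window_name)
    assert driver.caption(window_name) == window_name
    assert driver.has_menu(window_name, "Help")
    driver.close_window(window_name)

def run_checklist_suite(driver):
    for window_name in WINDOWS_TO_CHECK:   # the loop leverages the script
        checklist(driver, window_name)

class FakeDriver:
    """Minimal stand-in so the sketch runs; a real tool API replaces this."""
    def open_window(self, name): self.current = name
    def caption(self, name): return name
    def has_menu(self, name, menu): return True
    def close_window(self, name): self.current = None

run_checklist_suite(FakeDriver())
print("checklist suite completed")
```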

The test automation regime below fits the GUI test process; the manual-versus-automated split is a rough guideline and gives only a broad indication of which tests to automate.

Checklist testing

· Manual execution of tests of application conventions

· Automated execution of tests of object states, menus and standard features

Navigation

· Automated execution

Equivalence Partitioning, Boundary Values, Decision Tables, State-Transition Testing

· Automated execution of large numbers of simple tests of the same functionality or process, e.g. the 256 combinations indicated by a decision table

· Manual execution of low volume or complex tests

Desktop Integration, C/S Communications

· Automated execution of repeated tests of simple transactions

· Manual tests of complex interactions

Synchronisation

· Manual execution

Soak Testing, Compatibility Testing, Platform/Environment

· Automated execution

12.7.3 Criteria for the Selection of a GUI Tool

· Cross platform availability

· Support for the underlying test methodology, e.g. bitmap comparison, record/playback, etc.

· Functionality

· Ease of use

· Support for distributed testing

· Style and power of scripting language

· Test script development environment

· Non-standard window handling capability

· Availability of technical support

· Low price

12.7.4 Points to Consider When Designing a GUI Test Suite

· Structure the test suite, as far as possible, so that no test suite depends on the success of a previous test suite.

· Build the capability to recover from errors into verification events

· Start each test case from a known baseline (data state and window state); see the sketch after this list.

· Avoid low-level GUI testing methodologies such as bitmap comparison, as these tests may cause false test failures.
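The known-baseline point can be enforced with a setup hook that runs before every test case. A minimal unittest sketch follows; reset_test_database() and close_all_windows() are hypothetical helpers for the application under test:

```python
# Sketch: forcing every test case to start from a known baseline.
import unittest

def reset_test_database():
    pass   # hypothetical: restore a known data state

def close_all_windows():
    pass   # hypothetical: restore a known window state

class BaselinedGuiTest(unittest.TestCase):
    def setUp(self):
        # Runs before each test, so no test depends on a predecessor.
        reset_test_database()
        close_all_windows()

    def test_example(self):
        self.assertTrue(True)   # real GUI checks would go here

if __name__ == "__main__":
    unittest.main()
```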

12.8 Examples of GUI Tests

· Test each toolbar and menu item for navigation using the mouse and keyboard.

· Test window navigation using the mouse and keyboard.

· Test to make sure that proper format masks are used. For example, drop-down boxes should be properly sorted and date entries properly formatted.

· Test to make sure that the colors, fonts, and font widths are to standard for the field prompts and displayed text.

· Test that the colour of the field prompts and the field background is to standard in read-only mode.

· Test to make sure that vertical or horizontal scroll bars do not appear unless required.

· Test to make sure that the various controls on the window are aligned correctly.

· Test to make sure that the window is resizable.

· Check for the spellings of all the text displayed in the window, such as the window caption, status bar options, field prompts, pop-up text, and error messages.

· Test to make sure that all character or alphanumeric fields are left justified and that the numeric fields are right justified.

· Check for the display of defaults if there are any.

· In case of multiple windows, check to make sure that they all have the same look and feel.

· Check to make sure that all shortcut keys are defined and work correctly.

· Check the tab order; it should run from top left to bottom right, and read-only/disabled fields should be excluded from the TAB sequence.

· Check to make sure that the cursor is positioned on the first input field when the window is opened.

· Make sure that any default button specified works properly.

· Check for proper functioning of ALT+TAB.

· Assure that each menu command has an alternative hot key sequence and that it works correctly.

· Check that there are no duplicate hot keys defined on the window.

· Validate the behaviour of each control, such as push button, radio button, list box, and so on.

· Test to make sure that the window is modal where it is designed to be; this prevents the user from accessing other functions while the window is active.

· Test to make sure that multiple windows can be opened at the same time.

· Make sure that there is a Help menu.

· Check to make sure that the command buttons are greyed out when not in use.

13 Client / Server Testing

In general, testing of client/server software occurs at three different levels:

i. Individual client applications are tested in ‘disconnected’ mode; the operation of the server and the underlying network are not considered.

ii. The client software and associated server applications are tested in concert, but network operations are not explicitly exercised.

iii. The complete c/s architecture is tested.

Some common testing approaches are:

Application function tests: the application is tested in stand-alone fashion in an attempt to uncover errors in its operation.

Server tests: the co-ordination and data management functions of the server are tested. Server performance (overall response time and data throughput) is also considered.

Database tests: the accuracy and integrity of data stored by the server is tested. Transactions posted by client applications are examined to ensure that data are properly stored, updated and retrieved. Archiving is also tested.

Transaction tests: a series of tests are created to ensure that each class of transactions is processed according to requirements. Tests focus on the correctness of processing and also on performance issues.

Network communications tests: these verify that communication among the nodes of the network occurs correctly and that message passing, transactions and related network traffic occur without error. Network security tests may also be conducted as part of these tests.
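As a minimal illustration of a network communications check, the sketch below performs a message round trip against a local echo endpoint. In a real c/s test the endpoint would be the system’s own server and the messages its real transaction types; everything here is invented for illustration:

```python
# Sketch: a TCP message round trip, standing in for a node-to-node check.
import socket
import threading

def echo_server(server_socket):
    conn, _ = server_socket.accept()
    with conn:
        conn.sendall(conn.recv(1024))      # echo the message back

def test_round_trip():
    server_socket = socket.socket()
    server_socket.bind(("127.0.0.1", 0))   # OS-assigned free port
    server_socket.listen(1)
    port = server_socket.getsockname()[1]
    threading.Thread(target=echo_server, args=(server_socket,)).start()

    with socket.create_connection(("127.0.0.1", port)) as client:
        message = b"POST transaction 001"  # hypothetical transaction message
        client.sendall(message)
        assert client.recv(1024) == message   # passed without error
    server_socket.close()

test_round_trip()
print("network round trip OK")
```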

13.1 Testing Issues

The distributed nature of client/server systems poses a set of unique problems for software testers, with the following areas in focus:

· Client GUI considerations

· Target environment and platform diversity considerations

· Distributed database considerations

· Distributed processing considerations

· Non-robust target environment

· Non-linear performance relationships

The strategy and tactics associated with c/s testing must be designed in a manner that allows each of these issues to be addressed.

13.2 C/S Testing Tactics

Object-oriented testing techniques can be used even if the system is not implemented with OO technology. The replicated data and processes can be organised into classes of objects that share the same set of properties. Once test cases have been derived for a class of objects, those test cases should be broadly applicable to all instances of the class. The OO point of view is particularly valuable when the GUI of the c/s system is under test, since a GUI is inherently object oriented.

The performance of c/s systems must also be tested, due to the following issues:

· Large volumes of network traffic caused by ‘intelligent clients’

· An increased number of architectural layers

· Delays between distributed processes communicating across networks

· The increased number of suppliers of architectural components who must be dealt with.

The execution of a performance test must be automated. The main tool types for the test process are:

· Test database creation/maintenance – creates the large volumes of data needed on the database

· Load generation – tools can be of two types: either a test running tool which drives the client application, or a test driver which simulates client workstations (a simple sketch follows this list)

· Application running tool – a test running tool which drives the application under test and records response time measurements

· Resource monitoring – utilities which monitor and log client and server system resources, network traffic and database activity
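The load generation idea can be sketched with threads simulating client workstations, each recording response times. The target URL, client count and request count are all invented for illustration:

```python
# Sketch: a simple load generator simulating client workstations.
import threading
import time
import urllib.request

TARGET = "http://localhost:8080/search"   # hypothetical system under test
CLIENTS = 10
REQUESTS_PER_CLIENT = 20

timings = []
lock = threading.Lock()

def client_workstation():
    for _ in range(REQUESTS_PER_CLIENT):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(TARGET, timeout=10).read()
        except OSError:
            pass   # a real driver would count failures separately
        elapsed = time.perf_counter() - start
        with lock:
            timings.append(elapsed)

threads = [threading.Thread(target=client_workstation) for _ in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(timings)} requests, mean response {sum(timings) / len(timings):.3f}s")
```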

14 Web Testing

While many of the traditional concepts of software testing still hold true, Web and e-Business applications have a different risk profile to other, more mature environments. Gone are the days of measuring release cycles in months or years; instead, Web applications now have release cycles often measured in days or even hours! A typical Web tester now has to deal with shorter release cycles, constantly changing technology, fewer mature testing tools, and an anticipated user base that may run into millions on the first day of a site’s launch.

The most crucial aspect of Web site testing is the test environment. Testing a Web site is challenging; breaking the testing tasks up according to the tiers of the Windows DNA architecture helps to reduce the complexity of the task.

14.1 Standards of WEB Testing

This is the first and a very important phase of Web testing, and it should be clearly documented in your “Test Plan”. Whenever you test a Website, make sure that it follows accepted standards. The following points should be avoided in, or absent from, a standard Website:

14.1.1 Frames

Splitting a page into frames is very confusing for users since frames break the fundamental user model of the web page. All of a sudden, you cannot bookmark the current page and return to it, URLs stop working, and printouts become difficult. Even worse, the predictability of user actions goes out the door: who knows what information will appear where when you click on a link?

14.1.2 Gratuitous Use of Bleeding-Edge Technology

Don't try to attract users to your site by bragging about use of the latest web technology. The Site may attract a few nerds, but mainstream users will care more about useful content and site’s ability to offer good customer service. Using the latest and greatest before it is even out of beta is a sure way to discourage users: if their system crashes while visiting your site, you can bet that many of them will not be back. Unless you are in the business of selling Internet products or services, it is better to wait until some experience has been gained with respect to the appropriate ways of using new techniques.

14.1.3 Scrolling Text, Marquees & Constantly Running Animations

Never include page elements that move incessantly. Moving images have an overpowering effect on human peripheral vision. Give your user some peace and quiet to actually read the text!

14.1.4 Long Scrolling Pages

Only 10% of users scroll beyond the information that is visible on the screen when a page comes up. Test that all critical content and navigation options appear on the top part of the page.

14.1.5 Complex URLs

It is always found t
