How can we write a good test case?
Essentially, a test case is a document that carries a test case ID number, a title, the type of test being conducted, the input, the action or event to be performed, the expected output, and whether the test case achieved the desired output (Yes/No).
Test cases are based on the test plan, which covers each module and what is to be tested in it. Each action in a module is further divided into testable components, from which the test cases are derived.
Since a test case normally handles a single event at a time, it can be called a good test case as long as it reflects its relation to the test plan. It does not matter whether the event passes or fails; as long as the component to be tested is addressed and can be traced back to the test plan, the test case can be called a good test case.
unique-test-case-id: Test Case Title
Purpose: A short sentence or two about the aspect of the system that is being tested. If this gets too long, break the test case up or put more information into the feature descriptions.
Prereq: Assumptions that must be met before the test case can be run. E.g., "logged in", "guest login allowed", "user testuser exists".
Test Data: A list of variables and their possible values used in the test case. You can list specific values or describe value ranges. The test case should be performed once for each combination of values. These values are written in set notation, one per line. E.g.:
loginID = {valid loginID, invalid loginID, valid email, invalid email, empty}
password = {valid, invalid, empty}
Steps: Steps to carry out the test. See step formatting rules below.
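As a purely illustrative sketch of how the fields above might be captured in practice (the field names, the TC-LOGIN identifiers and the sample values are assumptions made up for this example, not part of any standard), the following Python snippet builds one test case record per combination of the set-notation test data, since the template says the test case should be performed once for each combination of values:

from dataclasses import dataclass, field
from itertools import product

@dataclass
class TestCase:
    """One record following the test case template above."""
    case_id: str            # unique-test-case-id
    title: str
    purpose: str
    prereq: str
    test_data: dict         # variable -> value chosen for this run
    steps: list = field(default_factory=list)
    expected_output: str = ""
    achieved: str = ""      # "Yes" or "No", filled in after the test is run

# Test data written out from the set notation in the template.
login_ids = ["valid loginID", "invalid loginID", "valid email", "invalid email", "empty"]
passwords = ["valid", "invalid", "empty"]

# One test case per combination of values.
cases = [
    TestCase(
        case_id=f"TC-LOGIN-{i:03d}",
        title="Login with varied credentials",
        purpose="Verify login handling for this combination of credentials.",
        prereq="guest login allowed; user testuser exists",
        test_data={"loginID": lid, "password": pwd},
    )
    for i, (lid, pwd) in enumerate(product(login_ids, passwords), start=1)
]

print(len(cases))  # 5 loginID values x 3 password values = 15 combinations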
Test plans in software development
In software testing, a test plan gives detailed testing information regarding an upcoming testing effort, including:
Scope of testing.
Schedule.
Test Deliverables.
Release Criteria.
Risks and Contingencies.
Test plan template, based on IEEE 829 format
Test Plan Identifier (TPI).
References
Introduction
Test Items
Software Risk Issues
Features to be Tested
Features not to be Tested
Approach
Item Pass/Fail Criteria
Entry & Exit Criteria
Suspension Criteria and Resumption Requirements
Test Deliverables
Remaining Test Tasks
Environmental Needs
Staffing and Training Needs
Responsibilities
Planning Risks and Contingencies
Approvals
Test plan identifier
Some type of unique, company-generated number that identifies this test plan, its level, and the level of software that it is related to. Preferably the test plan level will be the same as the related software level. The number may also identify whether the test plan is a master plan, a level plan, an integration plan, or whichever plan level it represents. This is to assist in coordinating software and testware versions within configuration management.
For example: "Master plan for 3A USB Host Mass Storage Driver TP_3A1.0"
References
List all documents that support this test plan. Documents that are referenced include:
System Requirements specifications.
High Level design document.
Detail design document.
Development and Test process standards.
Methodology.
Low level design.
Introduction
State the purpose of the Plan, possibly identifying the level of the plan (master etc.). This is essentially the executive summary part of the plan.
The intention of the project should also be included. You may want to include references to other plans, documents or items that contain information relevant to this project/process.
Test items (functions)
These are things you intend to test within the scope of this test plan. Essentially, it is a list of what is to be tested. This can be developed from the software application inventories as well as other sources of documentation and information. This section can be oriented to the level of the test plan: for higher levels it may be by application or functional area, for lower levels it may be by program, unit, module or build.
Software risk issues
Identify what software is to be tested and what the critical areas are, such as:
New version of interfacing software.
Ability to use and understand a new package/tool, etc.
Extremely complex functions.
Modifications to components with a past history of failure.
Poorly documented modules or change requests.
There are also some inherent software risks, such as complexity, that need to be identified:
Safety.
Multiple interfaces.
Impacts on Client.
Government regulations and rules.
Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user and developer levels. Be aware of vague or unclear requirements and requirements that cannot be tested.
The past history of defects (bugs) discovered during unit testing will help identify potential areas within the software that are risky. If the unit testing discovered a large number of defects, or a tendency towards defects in a particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster and clump together: if an area was defect-ridden earlier, it will most likely continue to be defect-prone.
Start with ideas such as: what worries me about this project/application?
Features to be tested
This is a listing of what is to be tested from the user's viewpoint of what the system does. This is not a technical description of the software, but a USER'S view of the functions.
Set the level of risk for each feature. Use a simple rating scale such as High, Medium and Low (H, M, L). These levels are understandable to a user. You should be prepared to discuss why a particular level was chosen.
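As a small, purely hypothetical sketch (the feature names and ratings are invented for the example), a risk rating recorded this way can be used to order the testing effort so that high-risk features are exercised first:

# Hypothetical feature-to-risk ratings on the H/M/L scale described above.
feature_risk = {
    "user login": "H",
    "password reset": "H",
    "report export": "M",
    "help screens": "L",
}

# Schedule higher-risk features for testing first.
rank = {"H": 0, "M": 1, "L": 2}
for feature, risk in sorted(feature_risk.items(), key=lambda item: rank[item[1]]):
    print(f"{risk}  {feature}")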
Features not to be tested
This is a listing of what is not to be tested, from both the user's viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software, but a user's view of the functions.
Identify why the feature is not to be tested; there can be any number of reasons:
Low risk, has been used before and was considered stable.
Will be released but not tested or documented as a functional part of the release of this version of the software.
Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be tested are directly affected by the levels of acceptable risk within the project, and what does not get tested affects the level of risk of the project.
Approach (strategy)
This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes should be identified. It is important to understand what is necessary in a test plan before trying to create your own strategy; learn from someone experienced in this area rather than trying to teach yourself this important step in engineering.
Are any special tools to be used and what are they?
Will the tool require special training?
What metrics will be collected?
Which level is each metric to be collected at?
How is Configuration Management to be handled?
How many different configurations will be tested?
Hardware
Software
Combinations of HW, SW and other vendor packages
What levels of regression testing will be done and how much at each test level?
Will regression testing be based on severity of defects detected?
How will elements in the requirements and design that do not make sense or are untestable be processed?
If this is a master test plan the overall project testing approach and coverage requirements must also be identified.
Specify if there are special requirements for the testing.
Only the full component will be tested.
A specified segment or grouping of features/components must be tested together.
Other information that may be useful in setting the approach is:
MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available (see the short sketch after this list).
SRE, Software Reliability Engineering - if this methodology is in use and if the information is available.
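For reference, MTBF is simply the total operating time divided by the number of failures observed during it. A minimal sketch, using made-up numbers:

# Mean Time Between Failures = total operating time / number of failures.
operating_hours = 1200.0   # assumed total hours the system ran during the test period
failures = 4               # assumed number of failures observed in that period

mtbf_hours = operating_hours / failures
print(f"MTBF = {mtbf_hours:.1f} hours")   # MTBF = 300.0 hours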
How will meetings and other organizational processes be handled?
Item pass/fail criteria
Specify the criteria to be used to determine whether each test item has passed or failed, including show-stopper issues. Show-stopper severity requires definition within each testing context.
Entry & exit criteria
Specify the criteria to be used to start testing and how you know when to stop the testing process.
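As an illustrative sketch only (the thresholds, counts and function name are assumptions invented for this example, not taken from IEEE 829 or any other standard), exit criteria are easiest to apply when they are written as explicit, checkable conditions:

def exit_criteria_met(planned, executed, passed, open_critical_defects):
    """Return True when testing can stop under these example criteria."""
    all_executed = executed >= planned            # every planned test was run
    pass_rate_ok = passed / executed >= 0.95      # at least 95% of executed tests passed
    no_blockers = open_critical_defects == 0      # no critical defects remain open
    return all_executed and pass_rate_ok and no_blockers

# Example: 120 planned, 120 executed, 116 passed, 1 critical defect still open.
print(exit_criteria_met(120, 120, 116, 1))        # False - the open critical defect blocks exit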
Suspension criteria & resumption requirements
Suspension criteria specify the criteria to be used to suspend all or a portion of the testing activities, while resumption criteria specify when testing can resume after it has been suspended. Examples of suspension criteria include:
Unavailability of external dependent systems during execution.
A defect is introduced that does not allow any further testing.
Critical path deadline is missed so that the client will not accept delivery even if all testing is completed.
A specific holiday shuts down both development and testing.
System Integration Testing in the Integration environment may be resumed under the following circumstances:
When the external dependent systems become available again.
When a fix is successfully implemented and the Testing Team is notified to continue testing.
The contract is renegotiated with the client to extend delivery.
The holiday period ends.
Suspension criteria assume that testing cannot go forward and that going backward is also not possible. A failed build would not suffice, as you could generally continue to use the previous build. Most major or critical defects would also not constitute suspension criteria, as other areas of the system could continue to be tested.
Test deliverables
List the documents, reports, and charts that will be presented to stakeholders on a regular basis during testing and when testing has been completed.
Remaining test tasks
If this is a multi-phase process or if the application is to be released in increments, there may be parts of the application that this plan does not address. These areas need to be identified to avoid any confusion should defects be reported back on those future functions. This will also allow the users and testers to avoid incomplete functions and prevent waste of resources chasing non-defects.
If the project is being developed as a multi-party process, this plan may only cover a portion of the total functions/features. This status needs to be identified so that those other areas have plans developed for them and to avoid wasting resources tracking defects that do not relate to this plan.
When a third party is developing the software, this section may contain descriptions of those test tasks belonging to both the internal and the external groups.
Environmental needs
Are there any special requirements for this test plan, such as:
Special hardware such as simulators, static generators, etc.
How will test data be provided? Are there special collection requirements or specific ranges of data that must be provided?
How much testing will be done on each component of a multi-part feature?
Special power requirements.
An environment where there is more feedback than needs improvement and meets expectations
Specific versions of other supporting software.
Restricted use of the system during testing.
Staffing and training needs
Training on the application/system.
Training for any test tools to be used.
The Test Items and Responsibilities sections affect this section. What is to be tested and who is responsible for the testing and training.
Responsibilities
Who is in charge? Don't leave people in charge of the test plan who have never done anything resembling a test plan before; this is vital, as they will learn nothing from it and the test will fail. This issue includes all areas of the plan. Here are some examples:
Setting risks.
Selecting features to be tested and not tested.
Setting overall strategy for this level of plan.
Ensuring all required elements are in place for testing.
Providing for resolution of scheduling conflicts, especially if testing is done on the production system.
Who provides the required training?
Who makes the critical go/no go decisions for items not covered in the test plans?
Who is responsible for each risk?
Planning risks and contingencies
What are the overall risks to the project with an emphasis on the testing process?
Lack of personnel resources when testing is to begin.
Lack of availability of required hardware, software, data or tools.
Late delivery of the software, hardware or tools.
Delays in training on the application and/or tools.
Changes to the original requirements or designs.
Complexities involved in testing the applications.
Specify what will be done for various events, for example:
Requirements definition will be complete by January 1, 20XX, and, if the requirements change after that date, the following actions will be taken:
The test schedule and development schedule will move out an appropriate number of days. This rarely occurs, as most projects tend to have fixed delivery dates.
The number of tests performed will be reduced.
The number of acceptable defects will be increased.
These two items could lower the overall quality of the delivered product.
Resources will be added to the test team.
The test team will work overtime (this could affect team morale).
The scope of the plan may be changed.
There may be some optimization of resources. This should be avoided, if possible, for obvious reasons.
Management is usually reluctant to accept scenarios such as the one above, even though they have seen it happen in the past. The important thing to remember is that, if you do nothing at all, the usual result is that testing is cut back or omitted completely, neither of which should be an acceptable option.
Approvals
Who can approve the process as complete and allow the project to proceed to the next level (depending on the level of the plan)? At the master test plan level, this may be all involved parties.
When determining the approval process, keep in mind who the audience is:
The audience for a unit test level plan is different from that of an integration, system or master level plan.
The levels and type of knowledge at the various levels will be different as well.
Programmers are very technical but may not have a clear understanding of the overall business process driving the project.
Users may have varying levels of business acumen and very little technical skill.
Always be wary of users who claim high levels of technical skills and programmers that claim to fully understand the business process. These types of individuals can cause more harm than good if they do not have the skills they believe they possess.
Glossary
Used to define terms and acronyms used in the document, and testing in general, to eliminate confusion and promote consistent communications.
Regional differences
There are often localized differences in the use of this term. In some locations, test plan can mean all of the tests that need to be run. Purists would suggest that a collection of tests or test cases is a test suite.