Questions covered: the difference between top down and bottom up design; data validity and data integrity; version and release; build and release.

If you don't have requirements specification, how will you go about testing the application?

Answer1:
If there is no requirements specification and testing is still required, then smoke testing or gorilla testing is the best option. In this way we can understand the functionality of the application and find bugs.

Answer2:
As a rule of thumb, never test or sign off on undocumented applications (applications without complete functional specifications). It is quite similar to swimming in unknown waters - you never know what you could encounter. In the case of software testing, it is not what you will encounter, but what you will not encounter. There is a very high probability that you could completely miss some functionality or, even worse, misunderstand it.
Software testing is closely associated with the program management team or the requirements analysis team, rather than the development team. When you test an application without knowledge of the requirements, you only see what the developer wants you to see, not what the customer wants to see. And customers / end users are our prime audience.
In the case of missing requirements, you would try something called 'focused exploratory testing': identifying every piece of the application and its functionality, and gradually digging deeper.
Smoke testing and gorilla testing (monkey testing) are different types of testing, and each serves a very different purpose.
Smoke testing (sanity testing) is used only to certify builds and is no measure of quality. It only ensures that there are no blocking issues in the build, so that the build can undergo a test pass.
Gorilla testing or monkey testing (the gorilla being the smarter of the monkey kind) is all about ad hoc testing. You would probably try hitting the ENTER key 100 times, or try a SUBMIT followed by a CANCEL followed by a SUBMIT again.
The idea of exploratory testing is to identify the functionality of the application while testing it.

How can I start my career in automated testing?

To start your career in automated testing:
1. Read all you can, and that includes reading product descriptions, pamphlets, manuals, books, information on the Internet, and whatever information you can lay your hands on.
2. Get some hands-on experience using automated testing tools, e.g. WinRunner and many other automated testing tools.

What should you test in a banking domain application?

You would like to test:
Banking Workflows
Data Integrity issues
Security and access issues
Recovery testing
All of the above need to be tested in the expected banking environment (hardware, LAN, operating systems, domain configurations, databases).

Collaboration between dev and testing

Unit testing is entirely the responsibility of the developer. A tester is not in as knowledgeable a position as the developer who wrote the feature to write meaningful unit tests, and I would push back hard against a development team that tried to hand this off. A feature is not 'done' (as in 'ready for test') until
- the requirements are met (be they specifications or use cases)
- the code has all been checked into revision control
- it has been verified that the newly checked-in code does not break the compile or the existing tests
- a comprehensive suite of feature-specific unit tests has been created and integrated into the build process (a sketch of such a test follows below)
- there are no TODOs (or similar watermarks) left in the new code
- all FIXMEs (or similar watermarks) in the new code have bug numbers assigned to them
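
To illustrate the kind of feature-specific unit test a developer would be expected to check in, here is a minimal sketch in Python using the standard unittest module; the Cart class and its methods are hypothetical names invented for this example.

import unittest

class Cart:
    # Hypothetical feature under test: a simple shopping cart.
    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_items(self):
        return sum(self.items.values())

class CartUnitTests(unittest.TestCase):
    def test_add_accumulates_quantity(self):
        cart = Cart()
        cart.add("SKU-1", 2)
        cart.add("SKU-1", 3)
        self.assertEqual(cart.total_items(), 5)

    def test_rejects_non_positive_quantity(self):
        cart = Cart()
        with self.assertRaises(ValueError):
            cart.add("SKU-1", 0)

if __name__ == "__main__":
    unittest.main()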

Is there any common testing framework, or any testing best practices, for distributed systems? For example, for a distributed database management system?

Consider a distributed database management system based on MySQL. It has three components:
1. A JDBC driver providing services for users' applications, including distributed transaction management, load balancing, query processing, table ID management, etc.
2. A master process, which manages global distributed transaction IDs, load balancing, the load-balancing strategy, etc.
3. An agent running on the same box as MySQL, which gathers the MySQL server's load-balance statistics.
AN OPERATIONAL ENVIRONMENT FOR TESTING DISTRIBUTED SOFTWARE
Distributed applications have traditionally been designed as systems whose data and processing capabilities reside on multiple platforms, each performing an assigned function within a known and controlled framework contained in the enterprise. Even if the testing tools were capable of debugging all types of software components, most do not provide a single monitoring view that can span multiple platforms. Therefore, developers must jump between several testing/monitoring sessions across the distributed platforms and interpret the cross-platform gap as best they can. That is, of course, assuming that comparable monitoring tools exist for all the required platforms in the first place. This is particularly difficult when one server platform is the mainframe, as generally the more sophisticated mainframe testing tools do not have comparable PC- or Unix-based counterparts. Therefore, testing distributed applications is exponentially more difficult than testing standalone applications.
To overcome this problem, we present an operational environment for testing distributed applications based on the Java Development Kit (JDK), as shown in Figure 1, allowing testers to track the flow of messages and data across and within the disparate platforms. The primary goal of this operational environment is to provide a coherent, seamless environment that can serve as a single platform for testing distributed applications. The hardware platform of the testbed, at the lowest level in Figure 1, is a network of SUN workstations running the Solaris 2.x operating system, which often plays a part in distributed and client-server systems. The widespread use of PCs has also prompted an ongoing effort to port the environment to the PC/Windows platform. On top of the hardware platform is the Java Development Kit. It consists of the Java programming language core functionality, the Java Application Programming Interface (API) with multiple package sets, and essential tools such as Remote Method Invocation (RMI), Java DataBase Connectivity (JDBC), and Beans for creating Java applications. On top of this platform is SITE, which provides automated support for the testing process, including modeling, specification, statistical analysis, test data generation, test results inspection, and test path tracing. At the top of this environment are the distributed applications. These can use or bypass any of the facilities and services in this operational environment. This environment receives commands from the users (testers) and produces test reports back.

Why is it that my company requires a PDR?

Your company requires a PDR, because your company wants to be the owner of the very best possible design and documentation. Your company requires a PDR, because when you organize a PDR, you invite, assemble and encourage the company's best experts to voice their concerns as to what should or should not go into your design and documentation, and why.
Please don't be negative. Please do not assume your company is finding fault with your work, or distrusting you in any way. Remember, PDRs are not about you, but about design and documentation. There is a 90+ percent probability your company wants you, likes you, and trusts you, because you're a specialist, and because your company hired you after a long and careful selection process.
Your company requires a PDR because PDRs are useful and constructive. Just about everyone - even corporate chief executive officers (CEOs) - attends PDRs from time to time. When a corporate CEO attends a PDR, he has to listen to "feedback" from shareholders. When a CEO attends a PDR, the meeting is called the "annual shareholders' meeting".

A list of ten good things about PDRs!

Number 1: PDRs are easy, because all your meeting attendees are your co-workers and friends.
Number 2: PDRs do produce results. With the help of your meeting attendees, PDRs help you produce better designs and better documents than the ones you could come up with, without the help of your meeting attendees.
Number 3: Preparation for PDRs helps a lot, but, in the worst case, if you had no time to read every page of every document, it's still OK for you to show up at the PDR.
Number 4: It's technical expertise that counts the most, but many times you can influence your group just as much, or even more so, if you're dominant or have good acting skills.
Number 5: PDRs are easy, because, even at the best and biggest companies, you can dominate the meeting by being either very negative, or very bright and wise.
Number 6: It is easy to deliver gentle suggestions and constructive criticism. The brightest and wisest meeting attendees are usually gentle on you; they deliver gentle suggestions that are constructive, not destructive.
Number 7: You get many chances to express your ideas, every time a meeting attendee asks you to justify why you wrote what you wrote.
Number 8: PDRs are effective, because there is no need to wait for anything or anyone; because the attendees make decisions quickly (as to what errors are in your document). There is no confusion either, because all the group's recommendations are clearly written down for you by the PDR's facilitator.
Number 9: Your work goes faster, because the group itself is an independent decision making authority. Your work gets done faster, because the group's decisions are subject to neither oversight nor supervision.
Number 10: At PDRs, your meeting attendees are the very best experts anyone can find, and they work for you, for FREE!

What is the best way to simulate the real behavior of a web based system?

It may seem obvious, but the best way to simulate the real behavior of a web-based system is to simulate actual user behavior, and the way to do this is from an actual browser with test functionality built inside.
The key to achieving the kind of test accuracy that eValid provides is to understand that it is the eValid browser that is doing the actual navigating and processing, and it is the eValid browser that is taking the actual performance timing measurements.
eValid employs IE-equivalent multi-threaded HTTP/S processing and uses IE-equivalent page rendering. While there is some overhead in injecting actions into the browser, it is very low. eValid's timers resolve to 1.0 msec, and this precision is usually enough to produce very meaningful performance testing results.

How can I shift my focus and area of work from QC to QA?

Number one: Focus on your strengths, skills, and abilities! Realize that there are MANY similarities between Quality Control and Quality Assurance! Realize you have MANY transferable skills!
Number two: Make a plan! Develop a belief that getting a job in QA is easy! HR professionals cannot tell the difference between quality control and quality assurance! HR professionals tend to respond to keywords (i.e. QC and QA), without knowing the exact meaning of those keywords!
Number three: Make it a reality! Invest your time! Get some hands-on experience! Do some QA work! Do any QA work, even if, for a few months, you get paid a little less than usual! Your goals, beliefs, enthusiasm, and action will make a huge difference in your life!
Number four: Read all you can, and that includes reading product pamphlets, manuals, books, information on the Internet, and whatever information you can lay your hands on!

How do I use LoadRunner for testing a web-based application?

Use exactly the data that a user of your site would enter on your e-commerce website.
Check your concept by implementing a simple test case, e.g.:
logon - some info - logoff
Stress your site with this simple script and n parallel virtual users
(n = 1, 10, 100, 1000, 10000),
then create some more complex tests and repeat.

No special server setup is needed in order to use LoadRunner with an e-commerce website. From the server's point of view, it is just as if many real users were stressing your site.
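
To make the idea concrete, here is a minimal Python sketch of the same logon/info/logoff scenario, with threads standing in for LoadRunner's parallel virtual users; the base URL and paths are hypothetical, and this illustrates only the concept, not LoadRunner's own scripting language.

import threading
import urllib.request

BASE = "http://shop.example.com"  # hypothetical site under test

def virtual_user(user_id):
    # logon - some info - logoff, mirroring the simple scenario above
    for path in ("/logon", "/account/info", "/logoff"):
        try:
            with urllib.request.urlopen(BASE + path, timeout=10) as resp:
                assert resp.status == 200
        except Exception as exc:
            print(f"user {user_id} failed on {path}: {exc}")

def run_load(n):
    # n parallel virtual users (try n = 1, 10, 100, ...)
    threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    for n in (1, 10, 100):
        run_load(n)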

What techniques and tools can enable me to migrate from QC to QA?

Technique number one: Mental preparation. Understand and believe what you want is not unusual at all! Develop a belief in yourself! Start believing what you want is attainable! You can change your career! Every year, millions of men and women change their careers successfully!
Number two: Make a plan! Develop a belief that getting a job in QA is easy! HR professionals cannot tell the difference between quality control and quality assurance! HR professionals tend to respond to keywords (i.e. QC and QA), without knowing the exact meaning of those keywords!
Number three: Make it a reality! Invest your time! Get some hands-on experience! Do some QA work! Do any QA work, even if, for a few months, you get paid a little less than usual! Your goals, beliefs, enthusiasm, and action will make a huge difference in your life!
Number four: Read all you can, and that includes reading product pamphlets, manuals, books, information on the Internet, and whatever information you can lay your hands on!

What is the BEST WAY to write test cases?

Answer1:
1) List the use cases (taken from business cases) from the functional specs. For each use case write a test case, and categorize the test cases into sanity, functionality, GUI, performance, etc. Then for each test case, write its workflow.
2) For a GUI application, make a list of all GUI controls. For each control, write test cases covering the control's UI, its functionality (its impact on the whole application), negative testing (for incorrect inputs), performance, etc.

Answer2:
1. Generate sunny-day scenarios based on use cases and/or requirements.
2. Generate rainy-day (negative, boundary, etc.) tests that correspond to the previously defined sunny-day scenarios.
3. Based on past experience and a knowledge of the product, generate tests for anything that might have been missed in steps one and two above. These tests need not correspond to any documented requirements or use cases. It's generally not possible to test every facet of the design, but with a little work and forethought you can test the high-risk areas or high-impact features. (A sketch of sunny-day and rainy-day cases follows below.)
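
As a sketch of how the sunny-day and rainy-day cases can sit side by side, here is a hedged Python example using pytest parametrization; validate_quantity and its 1-999 rule are hypothetical, invented for the illustration.

import pytest

def validate_quantity(qty):
    # Hypothetical function under test: accepts integers 1..999.
    if not isinstance(qty, int) or not 1 <= qty <= 999:
        raise ValueError("quantity out of range")
    return qty

# Sunny-day scenarios drawn from the (hypothetical) requirement 1 <= qty <= 999
@pytest.mark.parametrize("qty", [1, 500, 999])
def test_valid_quantities(qty):
    assert validate_quantity(qty) == qty

# Rainy-day scenarios: boundaries just outside the valid range, plus bad types
@pytest.mark.parametrize("qty", [0, 1000, -1, "10", None])
def test_invalid_quantities(qty):
    with pytest.raises(ValueError):
        validate_quantity(qty)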

What is the difference between build and release?

Builds and releases are similar, because both builds and releases are end products of software development processes. Builds and releases are similar, because both builds and releases help developers and QA teams to deliver reliable software.
A build is a version of the software, typically one that is still in testing. A version number is usually given to a released product, but sometimes a build number is used instead.
Difference number one: "Build" refers to software that is still in testing, but "release" refers to software that is usually no longer in testing.
Difference number two: "Builds" occur more frequently; "releases" occur less frequently.
Difference number three: "Versions" are based on "builds", and not vice versa. Builds (or a series of builds) are generated first, as often as one build per every morning (depending on the company), and then every release is based on a build (or several builds), i.e. the accumulated code of several builds.

How do you test for memory leaks manually?

Answer1:
There are tools to check this. Compuware DevPartner can help you test your application for memory leaks if the application is complex. The tool you select also depends on the OS on which you need to check for memory leaks.

Answer2:
Tools are more effective at this. The tools watch to see when memory is allocated and never freed. You can use various tools manually to see whether the same happens; you just won't be able to find the exact points where it happens.
On Windows you would use Task Manager or Process Explorer (freeware from Sysinternals), switch to the process view, and watch the memory used. Record the baseline memory usage (BL). Run an action once and record the memory usage (BLU). Perform the same actions repeatedly; if the memory usage has not returned to at least BLU, you have a memory leak. The trick is to wait for the computer to clean up after the transactions have finished, which should take a few seconds.
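
For leaks inside a Python application, the same baseline-and-repeat idea can be automated with the standard tracemalloc module; this is a minimal sketch, and leaky_action is a hypothetical function standing in for whatever action you would repeat.

import tracemalloc

_cache = []

def leaky_action():
    # Hypothetical action that allocates memory without freeing it
    _cache.append(bytearray(1024))

tracemalloc.start()
leaky_action()                         # warm-up run: record the baseline (BLU)
baseline, _ = tracemalloc.get_traced_memory()

for _ in range(100):                   # repeat the same action many times
    leaky_action()

current, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

# If usage keeps growing with repetition, suspect a leak
print(f"baseline: {baseline} bytes, after 100 repeats: {current} bytes")
if current > baseline * 2:
    print("memory did not return to baseline: possible leak")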

What is the difference between version and release?

Both version and release indicate particular points in the software development life cycle, or in the life cycle of a document. Both terms, version and release, are similar, i.e. pretty much the same thing, but there are minor differences between them.
Minor difference number 1: Version means a variation of an earlier or original type. For example, you might say, "I've downloaded the latest version of XYZ software from the Internet. The version number of this software is _____"
Minor difference number 2: Release is the act or instance of issuing something for publication, use, or distribution. Release means something thus released. For example, "Microsoft has just released their brand new gaming software known as _______"

How to write Test cases for Login screen?

The format for the test cases could be something like this:
1. Test cases for the GUI.
2. Positive test cases for login.
3. Negative test cases for login.
In the negative scenario we should include boundary value analysis, equivalence classes, and positive and negative test cases, plus cross-site scripting and SQL injection. SQL injection is especially high-risk for login pages.
(An example test case: enter special characters for the username; the application should display a message that the username may only contain the characters a-z and 0-9. A sketch of such cases follows below.)
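
Here is a hedged sketch of what a few of these negative login cases might look like in Python with pytest; attempt_login is a hypothetical stand-in for the real login form, implemented here only so the example runs.

import re
import pytest

def attempt_login(username, password):
    # Hypothetical stand-in for the real login form: validates the username
    # against the documented rule (a-z and 0-9 only) before any lookup.
    if not re.fullmatch(r"[a-z0-9]+", username):
        return "invalid username: only characters a-z and 0-9 are allowed"
    return "welcome" if (username, password) == ("demo", "secret") else "login failed"

@pytest.mark.parametrize("username", [
    "' OR '1'='1",                # classic SQL injection attempt
    "admin'--",                   # comment-out-the-password injection
    "<script>alert(1)</script>",  # cross-site scripting probe
    "user!@#$",                   # disallowed special characters
])
def test_malicious_usernames_are_rejected(username):
    assert "invalid username" in attempt_login(username, "anything")

def test_valid_login_succeeds():
    assert attempt_login("demo", "secret") == "welcome"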

What is the checklist for credit card testing?

In credit card testing the following validations are considered:
1) Testing the 4-DBC (four-digit batch code, present on the right corner of the credit card) for uniqueness
2) The message formats in which the data is sent
3) Luhn testing (check-digit validation of the card number; see the sketch below)
4) Network response
5) Terminal validations
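
Luhn testing refers to the standard Luhn check-digit algorithm that every valid card number must pass; here is a minimal sketch in Python (the algorithm itself is public, and the sample numbers are well-known test values, not real cards).

def luhn_valid(card_number: str) -> bool:
    # Return True if the digit string passes the Luhn checksum.
    digits = [int(d) for d in card_number if d.isdigit()]
    total = 0
    # Starting from the rightmost digit, double every second digit
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A quick self-check with a widely used test number and a corrupted copy
assert luhn_valid("4111 1111 1111 1111")
assert not luhn_valid("4111 1111 1111 1112")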

How do you test data integrity?

Data integrity is tested by the following tests:
Verify that you can create, modify, and delete any data in tables.
Verify that sets of radio buttons represent fixed sets of values.
Verify that a blank value can be retrieved from the database.
Verify that, when a particular set of data is saved to the database, each value gets saved fully, and the truncation of strings and rounding of numeric values do not occur.
Verify that the default values are saved in the database, if the user input is not specified.
Verify compatibility with old data, old hardware, versions of operating systems, and interfaces with other software.
Why do we perform data integrity testing? Because we want to verify the completeness, soundness, and wholeness of the stored data. Testing should be performed on a regular basis, because important data could, can, and will change over time.
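
As an example of the save-fully/no-truncation check in the list above, here is a minimal sketch using Python's built-in sqlite3 module; the table and values are invented for the illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name TEXT, balance REAL)")

# Values chosen to expose truncation of strings and rounding of numerics
name = "A" * 255
balance = 12345.6789

conn.execute("INSERT INTO customer VALUES (?, ?)", (name, balance))
stored_name, stored_balance = conn.execute(
    "SELECT name, balance FROM customer").fetchone()

# Each value must come back exactly as it was saved
assert stored_name == name, "string was truncated"
assert stored_balance == balance, "numeric value was rounded"
print("round-trip integrity check passed")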

What is the difference between data validity and data integrity?

Difference number one: Data validity is about the correctness and reasonableness of data, while data integrity is about the completeness, soundness, and wholeness of the data that also complies with the intention of the creators of the data.
Difference number two: Data validity errors are more common, and data integrity errors are less common.
Difference number three: Errors in data validity are caused by human beings - usually data entry personnel - who enter, for example, 13/25/2010, by mistake, while errors in data integrity are caused by bugs in computer programs that, for example, cause the overwriting of some of the data in the database, when somebody attempts to retrieve a blank value from the database.
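
The 13/25/2010 example above is easy to catch with a simple validity check; here is a minimal sketch using Python's standard datetime module.

from datetime import datetime

def is_valid_date(text: str) -> bool:
    # Return True if text is a real calendar date in MM/DD/YYYY form.
    try:
        datetime.strptime(text, "%m/%d/%Y")
        return True
    except ValueError:
        return False

assert is_valid_date("12/25/2010")       # a real date
assert not is_valid_date("13/25/2010")   # month 13 does not exist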

What is the difference between static and dynamic testing?

Difference number 1: Static testing is about prevention, dynamic testing is about cure.
Difference number 2: The static tools offer greater marginal benefits.
Difference number 3: Static testing is many times more cost-effective than dynamic testing.
Difference number 4: Static testing beats dynamic testing by a wide margin.
Difference number 5: Static testing is more effective!
Difference number 6: Static testing gives you comprehensive diagnostics for your code.
Difference number 7: Static testing achieves 100% statement coverage in a relatively short time, while dynamic testing often achieves less than 50% statement coverage, because dynamic testing finds bugs only in parts of the code that are actually executed.
Difference number 8: Dynamic testing usually takes longer than static testing. Dynamic testing may involve running several test cases, each of which may take longer than compilation.
Difference number 9: Dynamic testing finds fewer bugs than static testing.
Difference number 10: Static testing can be done before compilation, while dynamic testing can take place only after compilation and linking.
Difference number 11: Static testing can find all of the following, which dynamic testing cannot: syntax errors, code that is hard to maintain, code that is hard to test, code that does not conform to coding standards, and ANSI violations.
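
As a small illustration of "before compilation versus after", here is a Python sketch that statically checks a source string for a syntax error with the built-in compile() function, without ever executing the code; the buggy snippet is invented for the example.

source = """
def greet(name)      # missing colon: a syntax error
    print("hello", name)
"""

# Static check: compile() parses the code without running it,
# so the error is caught before any execution (the dynamic phase)
try:
    compile(source, "<example>", "exec")
except SyntaxError as err:
    print(f"static check caught: {err.msg} at line {err.lineno}")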

What testing tools should we use?

Ideally, you should use both static and dynamic testing tools. To maximize software reliability, you should use both static and dynamic techniques, supported by appropriate static and dynamic testing tools.
Reason number 1: Static and dynamic testing are complementary. Static and dynamic testing find different classes of bugs. Some bugs are detectable only by static testing, some only by dynamic.
Reason number 2: Dynamic testing does detect some errors that static testing misses. To eliminate as many errors as possible, both static and dynamic testing should be used.
Reason number 3: All this static testing (i.e. testing for syntax errors, testing for code that is hard to maintain, testing for code that is hard to test, testing for code that does not conform to coding standards, and testing for ANSI violations) takes place before compilation.
Reason number 4: Static testing takes roughly as long as compilation and checks every statement you have written.

Why should I use static testing techniques?

There are several reasons why one should use static testing techniques.
Reason number 1: One should use static testing techniques because static testing is a bargain, compared to dynamic testing.
Reason number 2: Static testing is up to 100 times more effective. Even in selective testing, static testing may be up to 10 times more effective. The most pessimistic estimates suggest a factor of 4.
Reason number 3: Since static testing is faster and achieves 100% coverage, the unit cost of detecting these bugs by static testing is many times lower than detecting bugs by dynamic testing.
Reason number 4: About half of the bugs, detectable by dynamic testing, can be detected earlier by static testing.
Reason number 5: If one uses neither static nor dynamic test tools, the static tools offer greater marginal benefits.
Reason number 6: If an urgent deadline looms on the horizon, the use of dynamic testing tools can be omitted, but tool-supported static testing should never be omitted.

What is the definition of top down design?

Top down design progresses from simple design to detailed design. Top down design solves problems by breaking them down into smaller, easier to solve subproblems. Top down design creates solutions to these smaller problems, and then tests them using test drivers. In other words, top down design starts the design process with the main module or system, then progresses down to lower level modules and subsystems. To put it differently, top down design looks at the whole system, and then explodes it into subsystems, or smaller parts. A systems engineer or systems analyst determines what the top level objectives are, and how they can be met. He then divides the system into subsystems, i.e. breaks the whole system into logical, manageable-size modules, and deals with them individually.

What is the future of software QA/testing?

In software QA/testing, employers increasingly want us to have a combination of technical, business, and personal skills. By technical skills they mean skills in IT, quantitative analysis, data modeling, and technical writing. By business skills they mean skills in strategy and business writing. By personal skills they mean personal communication, leadership, teamwork, and problem-solving skills. We, employees, on the other hand, want increasingly more autonomy, a better lifestyle, an increasingly more employee-oriented company culture, and a better geographic location. We continue to enjoy relatively good job security and, depending on the business cycle, many job opportunities. We realize our skills are important, and we have strong incentives to upgrade them, although we sometimes lack the information on how to do so. Educational institutions increasingly ensure that we are exposed to real-life situations and problems, but high turnover rates and a rapid pace of change in the IT industry often act as strong disincentives for employers to invest in our skills, especially non-company-specific skills. Employers continue to establish closer links with educational institutions, both through in-house education programs and human resources. The share of IT workers with IT degrees keeps increasing. Certification continues to help employers quickly identify those of us with the latest skills. During boom times, smaller and younger companies continue to be the most attractive to us, especially those that offer stock options and performance bonuses in order to retain and attract those of us who are the most skilled. High turnover rates continue to be the norm, especially during economic booms. Software QA/testing continues to be outsourced to offshore locations. Software QA/testing continues to be performed mostly by men, but the share of women keeps increasing.

How can I be effective and efficient, when I'm testing e-commerce web sites?

When you're doing black box testing of an e-commerce web site, you're most efficient and effective when you're testing the site's visual appeal, content, and home page. When you want to be effective and efficient, you need to:
verify that the site is well planned and customer-friendly;
verify that the choices of colors and fonts are attractive;
verify that the site's audio and video are customer-friendly and attractive;
verify that the choice of graphics is attractive;
verify that every page of the site is displayed properly on all the popular browsers;
verify the authenticity of facts, and ensure the site provides reliable and consistent information;
test the site for appearance, grammatical and spelling errors, visual appeal, choice of browsers, consistency of font size, download time, broken links, missing links, incorrect links, and browser compatibility;
test each toolbar, each menu item, every window, every field prompt, every pop-up text, and every error message;
test every page of the site for left and right justification, every shortcut key, each control, each push button, every radio button, and each item on every drop-down menu;
test each list box and each help menu item.
Also check whether the command buttons are grayed out when they're not in use.

What is the difference between top down and bottom up design?

Top down design proceeds from the abstract entity to get to the concrete design. Bottom up design proceeds from the concrete design to get to the abstract entity.
Top down design is most often used in designing brand new systems, while bottom up design is sometimes used when one is reverse engineering a design; i.e. when one is trying to figure out what somebody else designed in an existing system.
Bottom up design begins the design with the lowest level modules or subsystems, and progresses upward to the main program, module, or subsystem. With bottom up design, a structure chart is necessary to determine the order of execution, and the development of drivers is necessary to complete the bottom up approach.
Top down design, on the other hand, begins the design with the main or top-level module, and progresses downward to the lowest level modules or subsystems.
Real life sometimes is a combination of top down design and bottom up design. For instance, data modeling sessions tend to be iterative, bouncing back and forth between top down and bottom up modes, as the need arises.
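
Since the answer above mentions test drivers (needed for bottom up) and, implicitly, stubs (their top down counterpart), here is a minimal Python sketch of both; the tax and checkout functions are invented for the illustration.

# Bottom up: the low-level module exists first, so we write a DRIVER
# (a throwaway caller) to exercise it before the upper layers exist.
def tax(amount):                 # real low-level module
    return round(amount * 0.07, 2)

def driver_for_tax():            # test driver
    assert tax(100.0) == 7.0
    print("bottom-up driver: tax() passes")

# Top down: the high-level module exists first, so we write a STUB
# (a placeholder) for the lower-level module it depends on.
def tax_stub(amount):            # stub standing in for the unwritten module
    return 0.0                   # canned answer, just enough to proceed

def checkout_total(amount, tax_fn):   # real top-level module
    return amount + tax_fn(amount)

if __name__ == "__main__":
    driver_for_tax()
    # Top-level logic is testable even though the real tax module is stubbed
    assert checkout_total(100.0, tax_stub) == 100.0
    print("top-down stub: checkout_total() passes")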

Give me one test case that catches all the bugs!

On the negative side, if there were a "magic bullet", i.e. the one test case that was able to catch ALL the bugs, or at least the most important bugs, it would be a challenge to find it, because test cases depend on requirements; requirements depend on what customers need; and customers have a great many different needs that keep changing. As software systems change and get increasingly complex, it is increasingly challenging to write test cases.
On the positive side, there are ways to create "minimal test cases" which can greatly simplify the test steps to be executed. But, writing such test cases is time consuming, and project deadlines often prevent us from going that route. Often the lack of enough time for testing is the reason for bugs to occur in the field.
However, even with ample time to catch the "most important bugs", bugs still surface with amazing spontaneity. The fundamental challenge is, developers do not seem to know how to avoid providing the many opportunities for bugs to hide, and testers do not seem to know where the bugs are hiding.

What testing approaches can you tell me about?

Each of the followings represents a different testing approach: black box testing, white box testing, unit testing, incremental testing, integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing.

Can you give me five common problems?

Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway and poor communication.
Requirements are poorly written when they're unclear, incomplete, too general, or not testable; therefore there will be problems.
The schedule is unrealistic if too much work is crammed in too little time.
Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.
It's extremely common that new features are added after development is underway.
Miscommunication means either that the developers don't know what is needed or that customers have unrealistic expectations; either way, problems are guaranteed.

Can you give me five common solutions?

Solid requirements, realistic schedules, adequate testing, firm requirements, and good communication.
Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help nail down requirements.
Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.
Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.
Avoid new features. Stick to the initial requirements as much as possible. Be prepared to defend the design against changes and additions once development has begun, and be prepared to explain the consequences.
If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on.
Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, and change management tools. Ensure documentation is available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork and cooperation.

What if the application has functionality that wasn't in the requirements?

It can take a serious effort to determine if an application has significant unexpected or hidden functionality, which can indicate deeper problems in the software development process.
If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.
If it is not removed, design information will be needed to determine added testing or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the unexpected functionality only affects minor areas, e.g. small improvements in the user interface, then it may not be a significant risk.

How can software QA processes be implemented without stifling productivity?

When you implement software QA processes without stifling productivity, you want to implement them slowly over time. You want to use consensus to reach agreement on processes, and adjust and experiment as the organization grows and matures.
Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease, and there will be improved focus, and less wasted effort.
At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings and promote training as part of the QA process.
However, no one, especially not the talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario is that more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers.

Should I take a course in manual testing?

Yes, you want to consider taking a course in manual testing. Why? Because learning how to perform manual testing is an important part of one's education. Unless you have a significant personal reason for not taking a course, you do not want to skip an important part of an academic program.

To learn to use WinRunner, should I sign up for a course at a nearby educational institution?

Free, or inexpensive, education is often provided on the job, by an employer, while one is getting paid to do a job that requires the use of WinRunner and many other software testing tools.
In lieu of a job, it is often a good idea to sign up for courses at nearby educational institutes. Classes, especially non-degree courses in community colleges, tend to be inexpensive.

Test Specifications

The test case specifications should be developed from the test plan and are the second phase of the test development life cycle. The test specification should explain "how" to implement the test cases described in the test plan.
Test Specification Items
Each test specification should contain the following items:
Case No.: The test case number should be a three-digit identifier of the form c.s.t, where c is the chapter number, s is the section number, and t is the test case number.
Title: is the title of the test.
ProgName: is the program name containing the test.
Author: is the person who wrote the test specification.
Date: is the date of the last revision to the test case.
Background: (Objectives, Assumptions, References, Success Criteria): Describes in words how to conduct the test.
Expected Error(s): Describes any errors expected
Reference(s): Lists the reference documentation used to design the specification.
Data: (Tx Data, Predicted Rx Data): Describes the data flows between the Implementation Under Test (IUT) and the test engine.
Script: (Pseudo Code for Coding Tests): Pseudo code (or real code) used to conduct the test.
Example Test Specification
Test Specification
Case No. 7.6.3 Title: Invalid Sequence Number (TC)
ProgName: UTEP221 Author: B.C.G. Date: 07/06/2000
Background: (Objectives, Assumptions, References, Success Criteria)

Validate that the IUT will reject a normal flow PIU with a transmission header that has an invalid sequence number.
Expected Sense Code: $2001, Sequence Number Error
Reference - SNA Format and Protocols Appendix G/p. 380
Data: (Tx Data, Predicted Rx Data)
IUT
<-------- DATA FIS, OIC, DR1 SNF=20
<-------- DATA LIS, SNF=20
--------> -RSP $2001

Script: (Pseudo Code for Coding Tests)
SEND_PIU FIS, OIC, DR1, DRI SNF=20
SEND_PIU LIS, SNF=20
R_RSP $2001
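
To show how the items above can be captured in an executable form, here is a hedged Python sketch that models a test specification as a simple data structure, populated with the example spec; the field names simply mirror the items listed above.

from dataclasses import dataclass, field

@dataclass
class TestSpecification:
    # One test case specification, mirroring the items listed above.
    case_no: str          # c.s.t identifier: chapter.section.test
    title: str
    prog_name: str
    author: str
    date: str
    background: str
    expected_errors: str
    references: str
    script: list = field(default_factory=list)  # pseudo code steps

spec = TestSpecification(
    case_no="7.6.3",
    title="Invalid Sequence Number (TC)",
    prog_name="UTEP221",
    author="B.C.G.",
    date="07/06/2000",
    background="Validate that the IUT rejects a normal flow PIU with an "
               "invalid sequence number in the transmission header.",
    expected_errors="Sense code $2001, Sequence Number Error",
    references="SNA Format and Protocols Appendix G/p. 380",
    script=["SEND_PIU FIS, OIC, DR1, DRI SNF=20",
            "SEND_PIU LIS, SNF=20",
            "R_RSP $2001"],
)
print(spec.case_no, "-", spec.title)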

Formal Technical Review

Formal technical reviews include walkthroughs, inspections, round-robin reviews, and other small-group technical assessments of software. A formal technical review is a planned and controlled meeting attended by the analysts, programmers, and other people involved in the software development. Its goals and benefits are to:

Uncover errors in logic, function, or implementation for any representation of the software.
Verify that the software under review meets the requirements.
Ensure that the software has been represented according to predefined standards.
Achieve software that is developed in a uniform manner.
Make the project more manageable.
Enable early discovery of software defects, so that errors in the development and maintenance phases are substantially reduced.
Serve as a training ground, enabling junior members to observe the different approaches in the software development phases (giving them a helicopter view of what others are doing when developing the software).
Allow for continuity and backup of the project, because a number of people become familiar with parts of the software that they might not have otherwise seen.
Foster greater cohesion between different developers.

Reluctance of implementing Software Quality Assurance

Managers are reluctant to incur the extra upfront cost.
Such upfront costs are not budgeted in software development, so management may be unprepared to fork out the money.
Avoiding red tape (bureaucracy)
Red tape means the extra administrative activities that need to be performed, as SQA involves a lot of paperwork. New procedures to verify that software quality is correctly implemented need to be developed, followed through, and verified by external auditing bodies. These requirements involve a lot of administrative paperwork.

Benefits of Software Quality Assurance to the organization

Higher reliability will result in greater customer satisfaction: as software development is essentially a business transaction between a customer and a developer, customers will naturally tend to patronize the services of the developer again if they are satisfied with the product.

The overall life cycle cost of the software is reduced.

Software quality assurance ensures that the software conforms to its requirements and standards. The maintenance cost of the software is gradually reduced, because the software requires less modification after SQA. Maintenance refers to the correction and modification of errors that may be discovered only after implementation of the program. Hence, proper SQA procedures identify more errors before the software is released, resulting in an overall reduction of the life cycle cost.

Constraints of Software Quality Assurance

SQA is difficult to institute in small organizations, where the resources needed to perform the necessary activities are not available. A smaller organization tends not to have the required resources, such as manpower and capital, to assist in the process of SQA.

Cost not budgeted
In addition, SQA requires the expenditure of dollars that are not otherwise explicitly budgeted to software engineering and software quality. The implementation of SQA involves immediate upfront costs, while the benefits tend to be more long-term than short-term. Hence, some organizations may be less willing to include the cost of implementing SQA in their budget.

SOFTWARE TESTING METRICS

In general, testers must rely on metrics collected in the analysis, design, and coding stages of development in order to design, develop, and conduct the necessary tests. These generally serve as indicators of the overall testing effort needed. High-level design metrics can also help predict the complexities associated with integration testing and the need for specialized testing software (e.g. stubs and drivers). Cyclomatic complexity may identify modules that will require extensive testing, as those with high cyclomatic complexity are more likely to be error prone.
Metrics collected from testing, on the other hand, usually comprise the number and type of errors, failures, bugs, and defects found. These can then serve as measures used to calculate the further testing effort required. They can also be used as a management tool to determine the extent of the project's success or failure and the correctness of the design. In any case, these should be collected, examined, and stored for future needs.
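
Since cyclomatic complexity is called out above as an indicator of test effort, here is a simplified Python sketch that approximates it for a piece of source code by counting decision points with the standard ast module; real tools count more construct types, so treat this as an illustration of the idea, not a definitive measure.

import ast

def approx_cyclomatic_complexity(source: str) -> int:
    # Roughly: 1 + the number of decision points in the code.
    tree = ast.parse(source)
    decisions = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            return "found"
    return "none"
"""
# Higher numbers flag modules that are more likely to be error prone
print(approx_cyclomatic_complexity(sample))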

OBJECT ORIENTED TESTING METRICS

Testing metrics can be grouped into two categories: encapsulation and inheritance.
Encapsulation
Lack of cohesion in methods (LCOM) - The higher the value of LCOM, the more states have to be tested.
Percent public and protected (PAP) - This number indicates the percentage of class attributes that are public and thus the likelihood of side effects among classes.
Public access to data members (PAD) - This metric shows the number of classes that access other classes' attributes, and thus violations of encapsulation.
Inheritance
Number of root classes (NOR) - A count of distinct class hierarchies.
Fan in (FIN) - FIN > 1 is an indication of multiple inheritance and should be avoided.
Number of children (NOC) and depth of the inheritance tree (DIT) - For each subclass, its superclass has to be re-tested.
The above metrics (and others) are different from those used in traditional software testing; however, the metrics collected from testing should be the same (i.e. number and type of errors, performance metrics, etc.).
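
As a small illustration, NOC and DIT can be computed for Python classes by introspection; here is a minimal sketch with an invented class hierarchy (the metric definitions are simplified for the example).

class Account:                  # root class
    pass

class Savings(Account):         # child 1
    pass

class Checking(Account):        # child 2
    pass

class PremiumSavings(Savings):  # grandchild
    pass

def noc(cls):
    # Number of children: count of direct subclasses
    return len(cls.__subclasses__())

def dit(cls):
    # Depth of inheritance tree: ancestry levels below the root
    # (method resolution order minus the class itself and object)
    return len(cls.mro()) - 2

print("NOC(Account) =", noc(Account))                 # 2
print("DIT(PremiumSavings) =", dit(PremiumSavings))   # 2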
