Important Manual Testing Questions

What is quality assurance?

(1) A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements.

(2) A set of activities designed to evaluate the process by which products are developed or manufactured.

What's the difference between a client/server application and a Web application?

Client/server describes any application architecture in which one server application and one or many client applications are involved, such as a mail server and MS Outlook Express; it can be a web application as well. A Web application is a kind of client/server application that is hosted on a web server and accessed over the internet or an intranet. Testing the two differs in many ways that cannot all be covered in one post, but you should look at the data flow, the communication between client and server, and server-side concerns such as session variables and security.
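As a hedged sketch of how a web-application test can exercise those server-side concerns, the snippet below (Python with the requests library; the URL, form fields and cookie name are illustrative assumptions, not taken from the original answer) logs in over HTTP and checks that a server-side session is established:

    import requests

    BASE_URL = "http://example.test/app"  # hypothetical application under test

    def test_login_creates_server_side_session():
        # The test plays the role of the client, talking to the web server over HTTP.
        session = requests.Session()
        response = session.post(BASE_URL + "/login",
                                data={"user": "demo", "password": "demo"})
        assert response.status_code == 200

        # A web application usually keeps state in a server-side session,
        # exposed to the client only as a cookie.
        assert "sessionid" in session.cookies  # cookie name is an assumption

        # A follow-up request on the same session should be recognised as logged in.
        profile = session.get(BASE_URL + "/profile")
        assert profile.status_code == 200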

Software Quality Assurance Activities

Application of Technical Methods (Employing proper methods and tools for developing software)

Conduct of Formal Technical Review (FTR)

Testing of Software

Enforcement of Standards (Customer imposed standards or management imposed standards)

Control of Change (Assess the need for change, document the change)

Measurement (Software Metrics to measure the quality, quantifiable)

Record Keeping and Reporting (documentation, review records, change control records, etc., i.e. the benefits of documentation).

What's the difference between STATIC TESTING and DYNAMIC TESTING?

Answer1:

Dynamic testing: Requires the program to be executed.

Static testing: Does not involve program execution.

In dynamic testing, the program is run on some test cases and the results of the program's performance are examined to check whether the program operated as expected.

Static testing covers techniques such as compiler tasks (syntax and type checking), symbolic execution, program proving, data flow analysis, and control flow analysis.

Answer2:

Static Testing: Verification performed without executing the system code

Dynamic Testing: Verification and validation performed by executing the system code
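A minimal sketch of the distinction, using Python purely for illustration: the static check inspects the source without running it, while the dynamic check executes the code against a test case and compares actual with expected results:

    import ast

    SOURCE = "def add(a, b):\n    return a + b\n"

    # Static testing: examine the code without executing it.
    # Here the verification is only the syntax check performed by the parser.
    tree = ast.parse(SOURCE)
    print("Static check passed:", isinstance(tree, ast.Module))

    # Dynamic testing: execute the code and compare actual vs. expected results.
    namespace = {}
    exec(SOURCE, namespace)          # run the program under test
    actual = namespace["add"](2, 3)  # exercise it with a test case
    assert actual == 5, "expected 5, got %r" % actual
    print("Dynamic check passed: add(2, 3) ==", actual)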

Software Testing

Software testing is a critical component of the software engineering process. It is an element of software quality assurance and can be described as a process of running a program in such a manner as to uncover any errors. This process, while seen by some as tedious, tiresome and unnecessary, plays a vital role in software development.

Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine whether things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.

Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.

What's the difference between QA and testing?

The quality assurance process is a process for providing adequate assurance that the software products and processes in the product life cycle conform to their specified requirements and adhere to their established plans.

The purpose of Software Quality Assurance is to provide management with appropriate visibility into the process being used by the software project and of the products being built

What black box testing types can you tell me about?

Black box testing is functional testing, not based on any knowledge of internal software design or code.

Black box testing is based on requirements and functionality. Functional testing is also a black-box type of testing geared to functional requirements of an application.

Other black box types of testing include system testing, acceptance testing, functional testing, closed box testing, and integration testing.

What is software testing methodology?

One software testing methodology is the use of a three-step process of...

1. Creating a test strategy;

2. Creating a test plan/design; and

3. Executing tests.

This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his clients' applications.

What’s the difference between QA and testing?

TESTING means “Quality Control”; and

QUALITY CONTROL measures the quality of a product; while

QUALITY ASSURANCE measures the quality of processes used to create a quality product.

Why Testing CANNOT Ensure Quality

Testing in itself cannot ensure the quality of software. All testing can do is give you a certain level of assurance (confidence) in the software. On its own, the only thing that testing proves is that under specific controlled conditions, the software functioned as expected by the test cases executed.

How to find all the Bugs during first round of Testing?

Answer1:

I understand the problems you are facing. I was involved with a web-based HR system that was encountering the same problems. What I ended up doing was going back over a few release cycles and analyzing the types of defects found and when (in the release cycle including the various testing cycles) they were found. I started to notice a distinct trend in certain areas.

For each defect type, I started looking into whether it could have been caught in the prior phase (lots of things were being found in the Systems test phase that should have been caught earlier). If so, why wasn't it caught? Could it have been caught even earlier (say via a peer review)? If so, why not? This led me to start examining the various processes, and I found a definite problem with peer reviews (not very thorough IF they were even being done) and with the testing process (not rigorous enough). We worked with the customer and the folks doing the testing to start educating them and improving the processes. The result was that the number of defects found in the later test stages (System test, for example) was cut by more than half! It was getting harder to find problems with the product because they were being discovered earlier in the process -- saving time and money!

Answer2:

There could be several reasons for not catching a showstopper in the first or second build/rev. A found defect could either functionally or psychologically mask a second or third defect. Functionally, the thread or path to the second defect could have been broken or rerouted to another path; psychologically, the tester who found the first defect knows the app must go back and be rewritten, so he/she proceeds halfheartedly and misses the second one. I've seen both cases. It is difficult to keep testing a known defective app. The testers seem to lose interest, knowing that the effort they put in to test it will have to be redone on the next iteration. This will test your mettle as a lead to get them to follow through and maintain a professional attitude.

Answer3:

The best way is to prevent bugs in the first place. Also testing doesn't fix or prevent bugs. It just provides information. Applying this information to your situation is the important part.

The other thing that you may be encountering is that testing tends to be exploratory in nature. You have stated that these are existing bugs, but not stated whether tests already existed for these bugs.

Bugs in early cycles inhibit exploration. Additionally, a tester's understanding of the application and its relationships and interactions will improve with time, and thus more 'interesting' bugs tend to be found in later iterations as testers expand their exploration (i.e., think of new tests).

No matter how much time you have to read through the documents and inspect artefacts, seeing the actual application is going to trigger new thoughts, and thus introduce previously unthought of tests. Exposure to the application will trigger new thoughts as well, thus the longer your testing goes, the more new tests (and potential bugs) are going to be found. Iterative development is a good way to counter this, as testers get to see something physical earlier, but this issue will always exist to some degree as the passing of time, and exploration of the application allow new tests to be thought of at inconvenient moments.

Is regression testing performed manually?

The answer to this question depends on the initial testing approach. If the initial testing approach was manual testing, then the regression testing is usually performed manually. Conversely, if the initial testing approach was automated testing, then the regression testing is usually performed by automated testing.
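If the initial approach was automated, the regression suite is simply re-run against each new build. A minimal sketch of such an automated regression check is shown below (pytest is assumed, and calculate_discount is a hypothetical function of the application under test, not something from the original answer):

    import pytest

    from shop.pricing import calculate_discount  # hypothetical module under test

    # Each case documents behaviour that worked in an earlier release;
    # re-running the suite after every change guards against regressions.
    @pytest.mark.parametrize("order_total, expected_discount", [
        (0, 0),        # no order, no discount
        (100, 5),      # assumed standard 5% discount
        (1000, 100),   # assumed bulk 10% discount
    ])
    def test_discount_regression(order_total, expected_discount):
        assert calculate_discount(order_total) == expected_discount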

How do you choose which defects to fix out of 1,000,000 defects? (Because it would take too many resources to remove them all.)

Answer1:

Are you the programmer who has to fix them, the project manager who has to supervise the programmers, the change control team that decides which areas are too high risk to impact, the stakeholder-user whose organization pays for the damage caused by the defects or the tester?

The tester does not choose which defects to fix.

The tester helps ensure that the people who do choose, make a well-informed choice.

Testers should provide data to indicate the *severity* of bugs, but the project manager or the development team do the prioritization.

When I say "indicate the severity", I don't just mean writing S3 on a piece of paper. Test groups often do follow-up tests to assess how serious a failure is and how broad the range of failure-triggering conditions is.

Priority depends on a wide range of factors, including code-change risk, difficulty/time to complete the change, which stakeholders are affected by the bug, the other commitments being handled by the person most knowledgeable about fixing a certain bug, etc. Many of these factors are not within the knowledge of most test groups.

Answer2:

As testers we don't fix the defects, but we surely can prioritize them once they are detected. In our organization we assign a severity level to each defect depending on its influence on other parts of the product. If a defect doesn't allow you to go ahead and test the product, it is a critical one, so it has to be fixed ASAP. We have 5 levels:

1 - Critical

2 - High

3 - Medium

4 - Low

5 - Cosmetic

Answer3:

Priority/Severity    P1    P2    P3
S1
S2
S3

Generally, defects are classified in the grid shown above. Every organization/software has some target for fixing the bugs.

Example -

P1S1 -> 90% of the bugs reported should be fixed.

P3S3 -> 5% of the bugs reported may be fixed. The rest are taken up in later service packs or versions.

Thus the organization should decide its target and act accordingly.

Basically, bug-free software is not possible.

Answer4:

Ideally, the customer should assign priorities to their requirements. They tend to resist this. On a large, multi-year project I just completed, I would often (in the lack of customer guidelines) rely on my knowledge of the application and the potential downstream impacts in the modeled business process to prioritize defects.

If the customer doesn't, then I feel the test organization should prioritize, based on risk or other similar considerations.

What is software quality?

The quality of the software varies widely from system to system. Some common quality attributes are stability, usability, reliability, portability, and maintainability.

What are the five dimensions of the Risks?

Schedule: Unrealistic schedules, exclusion of certain activities when chalking out a schedule, etc. could be deterrents to on-time project delivery. An unstable communication link can be considered a probable risk if testing is carried out from a remote location.

Client: Ambiguous requirements definition, clarifications on issues not being readily available, frequent changes to the requirements etc. could cause chaos during project execution.

Human Resources: Non-availability of sufficient resources with the skill level expected by the project; attrition of resources. Appropriate training schedules must be planned for resources to keep the knowledge level on par when resources quit. Underestimating the training effort may have an impact on project delivery.

System Resources: Non-availability of, or delay in procuring, critical computer resources (hardware, software tools, or software licenses) will have an adverse impact.

Quality: Compound factors like lack of resources along with a tight delivery schedule and frequent changes to requirements will have an impact on the quality of the product tested.

What is good code?

Good code is code that works, is free of bugs, and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.
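As a small, hypothetical illustration of what 'readable and maintainable' can look like in practice (clear naming, a type-hinted signature, a docstring, and explicit error handling rather than clever tricks); tools such as pylint or flake8 can then enforce the agreed standards automatically:

    from typing import Iterable

    def average(values: Iterable[float]) -> float:
        """Return the arithmetic mean of the given values.

        Raises ValueError for empty input instead of surfacing an obscure
        ZeroDivisionError from deep inside the calculation.
        """
        items = list(values)
        if not items:
            raise ValueError("average() requires at least one value")
        return sum(items) / len(items)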

How do you perform integration testing?

To perform integration testing, first, all unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.

Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable, or acceptable, based on client input.
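A minimal sketch of an integration-level test case: two hypothetical components, an OrderService and an InventoryRepository (both invented here for illustration), are wired together and the test exercises the interface between them rather than either unit in isolation:

    # Hypothetical components; the names are illustrative only.
    class InventoryRepository:
        def __init__(self):
            self._stock = {"widget": 10}

        def reserve(self, item, quantity):
            if self._stock.get(item, 0) >= quantity:
                self._stock[item] -= quantity
                return True
            return False


    class OrderService:
        def __init__(self, inventory):
            self._inventory = inventory

        def place_order(self, item, quantity):
            # The interface under test: OrderService depends on InventoryRepository.
            return "ACCEPTED" if self._inventory.reserve(item, quantity) else "REJECTED"


    def test_order_service_and_inventory_work_together():
        inventory = InventoryRepository()
        service = OrderService(inventory)
        assert service.place_order("widget", 3) == "ACCEPTED"
        assert service.place_order("widget", 20) == "REJECTED"  # not enough stock left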

Why is back-end testing required if we are going to check the front end? What errors/bugs are we missing by not doing back-end testing?

Why do we need to do unit testing if all the features are being tested in system testing? What extra things are tested in unit testing that cannot be tested in system testing?

Answer1:

Assume that you're thinking client-server or web. If you test the application on the front end only, you can see whether the data was stored and retrieved correctly. You can't see whether the servers are in an error state or not. Many server processes are monitored by another process; if they crash, they are restarted. You can't see that without looking at it.

The data may not be stored correctly either but the front end may have cached data lying around and it will use that instead. The least you should be doing is verifying the data as stored in the database.

It is easier to test data being transferred on the boundaries and see the results of those transactions when you can set the data in a driver.

Answer2:

Back-end testing: Basically, the need for this testing depends on your project. Say your project is a ticket booking system. On the front end you are provided with an interface where you can book a ticket by entering the appropriate details (like the place you want to go and the time you want to travel). It will have a data storage system (a database, a spreadsheet, etc.) which is the back end for storing the details entered by the user.

After submitting the details, you might be given a correct acknowledgement, but in the back end the details might not be updated correctly in the database because of wrongly developed logic. That would cause a major problem.

Regarding unit-level testing and system testing: unit-level testing covers the basic checks of whether the application works with the basic requirements. This is done by developers before delivering to QA. In system testing, in addition to the unit checks, you perform all the checks (all possible integration checks that are required). Basically this is carried out by the tester.
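A minimal sketch of that kind of back-end check for the booking example above, using an in-memory SQLite database purely as a stand-in for the real data store; the table and column names are assumptions for illustration:

    import sqlite3

    def test_booking_is_persisted_correctly():
        # Stand-in for the application's real back-end database.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE bookings (id INTEGER PRIMARY KEY, "
                     "destination TEXT, travel_time TEXT)")

        # In a real test this row would be created by booking a ticket through
        # the front end or an API; here the insert is simulated directly.
        conn.execute("INSERT INTO bookings (destination, travel_time) VALUES (?, ?)",
                     ("Paris", "2024-01-01 09:00"))
        conn.commit()

        # Back-end verification: read the row straight from the database and
        # confirm it matches what the front end acknowledged.
        row = conn.execute("SELECT destination, travel_time FROM bookings "
                           "WHERE id = 1").fetchone()
        assert row == ("Paris", "2024-01-01 09:00")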

Answer3:

Ever heard about the divide and conquer tactic? The same method is applied in back-end and front-end testing.

A good back-end test will help minimize the burden of front-end testing.

Another point is that you can test the back end while the front end is being developed, so true parallelism can be achieved.

Back-end testing has another problem which must be addressed before the front end can use it: concurrency. Building a scenario to test concurrency is a formidable task.

A complex thing is hard to test. Creating such scenarios will make you unsure which tests you have already done and which you haven't. What we need is an effective method to test our application. The simplest method I know is divide and conquer.

Answer4:

A wide range of errors are hard to see if you don't see the code. For example, there are many optimizations in programs that treat special cases. If you don't see the special case, you don't test the optimization. Also, a substantial portion of most programs is error handling. Most programmers anticipate more errors than most testers.

Programmers find and fix the vast majority of their own bugs. This is cheaper, because there is no communication overhead, faster because there is no delay from tester-reporter to programmer, and more effective because the programmer is likely to fix what she finds, and she is likely to know the cause of the problems she sees. Also, the rapid feedback gives the programmer information about the weaknesses in her programming that can help her write better code.

Many tests -- most boundary tests -- are done at the system level primarily because we don't trust that they were done at the unit level. They are wasteful and tedious at the system level. I'd rather see them properly done and properly automated in a suite of programmer tests.
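A minimal sketch of the kind of boundary test that is cheap to write and automate at the unit level; classify_age is a hypothetical function used only for illustration:

    # Hypothetical unit under test; illustrative only.
    def classify_age(age):
        if age < 0:
            raise ValueError("age cannot be negative")
        return "minor" if age < 18 else "adult"


    def test_age_boundaries():
        # Boundary values around the 18-year threshold and the invalid edge.
        assert classify_age(0) == "minor"
        assert classify_age(17) == "minor"
        assert classify_age(18) == "adult"
        try:
            classify_age(-1)
        except ValueError:
            pass
        else:
            raise AssertionError("negative age should be rejected")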

What is Software “Quality”?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.

However, quality is a subjective term. It will depend on who the ‘customer’ is and their overall influence in the scheme of things. A wide-angle view of the ‘customers’ of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organisation’s management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine reviewers, etc. Each type of ‘customer’ will have their own view on ‘quality’ - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.

What is retesting?

Answer1:

Retesting is usually equated with regression testing (see above) but it is different in that it follows a specific fix--such as a bug fix--and is very narrow in focus (as opposed to testing the entire application again in a regression test). A product should never be released after a change has been applied to the code with only a retest of the bug fix and without a regression test.

Answer2:

1. Re-testing is testing for a specific bug after it has been fixed (the one given by your definition).

2. Re-testing can be one which is done for a bug which was raised by QA but could not be found or confirmed by Development and has been rejected. So QA does a re-test to make sure the bug still exists and again assigns it back to them.

When the entire project has been tested and the client has some doubts about the quality of testing, re-testing can be called for. It can also be testing the same application again for better quality.

Answer3:

Regression testing is the selective retesting of a system that has been modified, to ensure that any bugs have been fixed, that no other previously working functions have failed as a result of the repairs, and that newly added features have not created problems with previous versions of the software. It is also referred to as verification testing.

It is important to determine whether, in a given set of circumstances, a particular series of tests has been failed. The supplier may want to submit the software for re-testing. The contract should deal with the parameters for retests, including (1) will test programs which are doomed to failure be allowed to finish early, or must they be completed in their entirety? (2) when can, or must, the supplier submit his software for retesting? and (3) how many times can the supplier fail tests and submit software for retesting - is this based on time spent, or the number of attempts? A well drawn contract will grant the customer options in the event of failure of acceptance tests, and these options may vary depending on how many attempts the supplier has made to achieve acceptance.

So the conclusion is retesting is more or less regression testing. More appropriately retesting is a part of regression testing.

Answer4:

Re-testing is simply executing the test plan another time. The client may request a re-test for any reason - most likely is that the testers did not properly execute the scripts, poor documentation of test results, or the client may not be comfortable with the results.

I've performed re-tests when the developer inserted unauthorized code changes, or did not document changes.

Regression testing is the execution of test cases "not impacted" by the specific project. I am currently working on testing of a system with poor system documentation (and no user documentation) so our regression testing must be extensive.

Answer5:

* QA gets a bug fix and has to verify that the bug is fixed. You might want to check a few things based on a “gut feel” and get away with calling it retesting, but not the entire function/module/product.

* Development refuses a bug on the basis of it being “Non Reproducible”; then retesting, preferably in the presence of the developer, is needed.
