How to Implement a QA Process?

How do you establish a QA process in an organization?

1. CURRENT SITUATION

The first thing you should do is put what you currently do on paper, in some sort of flowchart diagram. This will let you analyze what is currently being done.


2. DEVELOPMENT PROCESS STAGE
Once you have the "big picture", you have to be aware of the current status of your development project or projects. The processes you select will vary depending on whether you are in the early stages of developing a new application (i.e., developing a version 1.0) or maintaining an existing application (i.e., working on release 6.7.1).


3. PRIORITIES
The next thing you need to do is identify the priorities of your project, for example:
- Compliance with industry standards
- Validation of new functionality (new GUIs, etc.)
- Security
- Capacity planning (see "Effective Methods for Software Testing" for more information)
Make a list of the priorities, and then assign each a value of (H)igh, (M)edium, or (L)ow.


4. TESTING TYPES
Once you are aware of the priorities, focus on the High ones first, then the Medium ones, and finally evaluate whether the Low ones need immediate attention.
Based on this, select the testing types that will provide coverage for your priorities (a small mapping sketch follows the list below). Examples of testing types:
- Functional Testing
- Integration Testing
- System Testing
- System-to-System Testing (for testing interfaces)
- Regression Testing
- Load Testing
- Performance Testing
- Stress Testing
Etc.
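The mapping from priorities to testing types can be made explicit. Below is a minimal Python sketch; the priorities, their H/M/L values, and the type mappings are illustrative assumptions, not a prescription:

```python
# Illustrative only: map prioritized concerns (with H/M/L values) to the
# testing types that cover them, then work through High first, then Medium,
# then Low, as described above.
priorities = {
    "Validation of new functionality": "H",
    "Compliance with industry standards": "H",
    "Security": "M",
    "Capacity planning": "L",
}

coverage = {
    "Validation of new functionality": ["Functional Testing", "Regression Testing"],
    "Compliance with industry standards": ["System Testing", "System-to-System Testing"],
    "Security": ["System Testing"],
    "Capacity planning": ["Load Testing", "Performance Testing", "Stress Testing"],
}

for level in ("H", "M", "L"):
    for concern, prio in priorities.items():
        if prio == level:
            print(f"[{level}] {concern}: {', '.join(coverage[concern])}")
```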


5. WRITE A TEST PLAN
Once you have determined your needs, the simplest way to document and implement your process is to write a "Test Plan" for every effort you are engaged in (i.e., for every release).
For this you can use the generic Test Plan templates available on the web, which will help you brainstorm and define the scope of your testing:
- Scope of Testing (defects, functionality, and what will and will not be tested).
- Testing Types (Functional, Regression, etc.).
- Responsible people
- Requirements traceability matrix (match test cases with requirements to ensure coverage; see the sketch after this list)
- Defect tracking
- Test Cases
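A requirements traceability matrix can be as simple as a mapping from requirement IDs to the test cases that cover them. The sketch below uses hypothetical IDs purely for illustration:

```python
# Minimal traceability matrix sketch: each requirement maps to the test
# cases that exercise it. All IDs are hypothetical.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # no test case yet: a coverage gap
}

# Flag requirements that no test case covers.
gaps = [req for req, cases in traceability.items() if not cases]
print("Requirements without coverage:", gaps or "none")
```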


6. DURING AND POST-TESTING ACTIVITIES
Make sure you keep track of the completion of your testing activities and the defects found, and that you meet the exit criteria before moving to the next stage in testing (e.g., User Acceptance Testing, then Production Release).
Make sure you have a mechanism for:
- Reporting
- Test tracking (a small sketch follows this list)
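An exit-criteria check can be made mechanical. This sketch assumes made-up thresholds (98% of planned tests executed, no open high-severity defects, at most two medium); your team's agreed criteria would replace them:

```python
# Hypothetical exit-criteria check before moving to User Acceptance Testing.
executed, planned = 148, 150        # test cases executed vs. planned
open_defects = {"high": 0, "medium": 1, "low": 4}

exit_ok = (
    executed / planned >= 0.98       # at least 98% of planned tests executed
    and open_defects["high"] == 0    # no open high-severity defects
    and open_defects["medium"] <= 2  # at most two open medium-severity defects
)
print("Ready for User Acceptance Testing:", exit_ok)
```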


What is software testing?

1) Software testing is a process that identifies the correctness, completeness, and quality of software. Strictly speaking, testing cannot establish the correctness of software: it can find defects, but it cannot prove that there are no defects.
2) It is a systematic analysis of the software to see whether it performs to specified requirements. Software testing uncovers errors; however, it cannot tell us that no errors remain.

Any recommendations for estimating how many bugs the customer will find before the gold release?

Answer1:
If you take the total number of bugs in the application and subtract the number of bugs you found, the difference is the maximum number of bugs the customer can find.
Seriously, I doubt you will find any sort of calculation or formula that can answer your question with much accuracy. If you can reference a previous application release, it might give you a rough idea. The best thing to do is ensure your test coverage is as good as you can make it, and then hope you have found the ones the customer might find.
Remember: software testing is risk management!

Answer2:
To estimate:
1. Determine your test coverage, then estimate with the 80/20 principle in mind.
2. Look at the depth of your test cases, e.g., how much unit-level testing and how much life-cycle testing you have performed (most customer-reported bugs come from real-life use of the software over its life cycle).
3. Refer to the defect density of earlier releases of the same product line.
By combining these evaluations you can arrive at a reasonably good estimate of the remaining bugs (a worked sketch follows).
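As a worked illustration of point 3, a defect-density estimate scales the defect rate of a previous release to the size of the current one. Every number below is invented:

```python
# Illustrative defect-density estimate; all figures are made up.
prev_total_defects = 240                 # internal + customer-reported, last release
prev_size_kloc = 80                      # size in thousands of lines of code
density = prev_total_defects / prev_size_kloc   # 3.0 defects per KLOC

curr_size_kloc = 100
found_so_far = 250                       # defects found by testing this release
estimated_total = density * curr_size_kloc      # ~300 defects expected
estimated_remaining = max(0.0, estimated_total - found_so_far)
print(f"Rough estimate of bugs left for the customer to find: {estimated_remaining:.0f}")
```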

Answer3:
You can map the customer issues from a previous release (if you have the same product line) onto the current release; this is the best way to estimate for the gold release or the migration of any product. Also, up to the gold release most issues come from various combinations of installation testing: cross-platform, i18n issues, customization, upgrade, and migration.
These can be taken as parameters, and the estimation can then be completed.

When the build comes to the QA team, what parameters should be considered in order to reject the build up front, without committing to testing?

Answer1:
Agree with R&D on a set of tests such that if any one of them fails, you can reject the build. I usually have some build verification tests that just make sure the build is stable and the major functionality is working.
Then, if one of those tests fails, you can reject the build (a minimal smoke-test sketch follows).
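A build verification suite can be a handful of pytest cases covering only stability and core functionality. Here "myapp", create_app, and login are hypothetical stand-ins for your product's entry points:

```python
# Minimal build verification (smoke) test sketch using pytest.
# "myapp" and its functions are hypothetical; if any of these tests fail,
# the build is rejected without further testing.
import myapp  # an import failure alone is grounds for rejecting the build

def test_application_starts():
    app = myapp.create_app()
    assert app is not None

def test_core_login_works():
    app = myapp.create_app()
    assert app.login("smoke_user", "smoke_pass")
```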

Answer2:
The only way to legitimately reject a build is if the entrance criteria have not been met. That means the entrance criteria for the test phase have been defined and agreed upon up front. This should be standard for all builds of all products. Entrance criteria could include:
- Turn-over documentation is complete
- All unit testing has been successfully completed and U/T cases are documented in turn-over
- All expected software components have been turned-over (staged)
- All walkthroughs and inspections are complete
- Change requests have been updated to correct status
- Configuration Management and build information is provided, and correct, in turn-over
The only way we could really reject a build without any testing would be a failure of the turn-over procedure. There may be politics involved, but there shouldn't be. The only way the test phase can proceed is for the test team to have all the components required to perform successful testing. You will have to define entrance (and exit) criteria for each phase of the SDLC. This is an effort to be undertaken by the whole development team together. Development's entrance criteria would include signed requirements, the HLD doc, etc. Having these criteria pre-established sets everyone up for success (a checklist sketch follows).
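Entrance criteria like those above can be encoded as an explicit checklist that is evaluated before a build is accepted into test. The items mirror the list above; the boolean values are invented and would come from your turn-over process:

```python
# Sketch: entrance criteria as a checklist evaluated at build turn-over.
entrance_criteria = {
    "turn-over documentation complete": True,
    "unit testing complete and documented": True,
    "all expected software components staged": True,
    "walkthroughs and inspections complete": False,
    "change requests updated to correct status": True,
    "CM and build information provided and correct": True,
}

unmet = [item for item, met in entrance_criteria.items() if not met]
if unmet:
    print("Reject build; unmet entrance criteria:", ", ".join(unmet))
else:
    print("Accept build into the test phase.")
```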

Answer3:
The primary reason to reject a build is that it is untestable, or that any testing of it would be considered invalid.
For example, suppose someone gave you a "bad build" in which several of the wrong files had been loaded. Once you know it contains the wrong versions, most groups agree there is no point in continuing to test that build.
Every reason for rejecting a build beyond this is reached by agreement. For example, if you set up a build verification test and the program fails it, the agreement in your company might be to reject the build from testing. Some BVTs are designed to include relatively few tests, covering only core functionality; failure of any of these tests might reflect fundamental instability. However, some test groups include many additional tests, and failure of those might not be grounds for rejecting a build.
In some companies there are firm entry criteria for testing. Many companies pay lip service to entry criteria but start testing the code whether the entry criteria are met or not. Neither approach is right or wrong; it is the culture of the company. Be sure of your corporate culture before rejecting a build.

Answer4:
Generally, a company will have set some minimum goals/criteria that a build needs to satisfy; if it satisfies them, it can be accepted, otherwise it has to be rejected. For example:
- No high-priority bugs
- At most 2 medium-priority bugs
- The sanity test (minimum acceptance / basic acceptance) should pass
- The reason for the new build (say, a change for a specific case) should pass
- Testing must be able to proceed; non-testability, or similar problems relating to the new build or the product, are grounds for rejection
If the above criteria are not met, the build can be rejected (a small sketch of this rule follows).
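That accept/reject rule can be stated as a tiny function. The thresholds (zero high-priority bugs, at most two medium-priority bugs, sanity test passed) come from the example above and would be replaced by your team's agreed criteria:

```python
# Hypothetical accept/reject rule based on the example criteria above.
def accept_build(high_bugs: int, medium_bugs: int, sanity_passed: bool) -> bool:
    return high_bugs == 0 and medium_bugs <= 2 and sanity_passed

print(accept_build(high_bugs=0, medium_bugs=1, sanity_passed=True))   # True
print(accept_build(high_bugs=1, medium_bugs=0, sanity_passed=True))   # False
```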

What is software testing?

Software testing is more than just error detection.
Testing software means operating the software under controlled conditions, in order to (1) verify that it behaves “as specified”; (2) detect errors; and (3) validate that what has been specified is what the user actually wanted.
Verification is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements. [Verification: Are we building the system right?]
Error Detection: Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn’t or things don’t happen when they should.
Validation looks at system correctness, i.e., it is the process of checking that what has been specified is what the user actually wanted. [Validation: Are we building the right system?]
In other words, validation checks to see if we are building what the customer wants/needs, and verification checks to see if we are building that system correctly. Both verification and validation are necessary, but different components of any testing activity.

The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process of analyzing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item.

Remember: The purpose of testing is verification, validation and error detection in order to find problems – and the purpose of finding those problems is to get them fixed.

What is the testing lifecycle?

There is no single standard, but it typically consists of:
Test Planning (Test Strategy, Test Plan(s), Test Bed Creation)
Test Development (Test Procedures, Test Scenarios, Test Cases)
Test Execution
Result Analysis (compare Expected to Actual results; a small sketch follows this list)
Defect Tracking
Reporting
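The result-analysis step boils down to comparing each test case's expected result to its actual result, with mismatches feeding defect tracking and reporting. The test IDs and values below are hypothetical:

```python
# Minimal result-analysis sketch: compare expected to actual per test case.
expected = {"TC-001": 200, "TC-002": "logged_in", "TC-003": 42}
actual   = {"TC-001": 200, "TC-002": "logged_in", "TC-003": 41}

for tc_id, exp in expected.items():
    act = actual.get(tc_id)
    status = "PASS" if act == exp else f"FAIL (expected {exp!r}, got {act!r})"
    print(tc_id, status)  # failures go into defect tracking and reporting
```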

How to validate data?

I assume that you are doing ETL (extract, transform, load) and cleaning. If my assumption is correct, then:
1. You are building a data warehouse / doing data mining.
2. You are asking the right question in the wrong place.
That said, a few basic validation checks apply in almost any ETL setting (see the sketch below).
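As a minimal illustration, post-load validation usually checks for missing values and out-of-range values. The records, fields, and thresholds below are invented:

```python
# Hedged sketch of basic post-load data validation for an ETL job.
rows = [
    {"id": 1, "age": 34, "email": "a@example.com"},
    {"id": 2, "age": None, "email": "b@example.com"},
    {"id": 3, "age": 212, "email": ""},
]

problems = []
for row in rows:
    if row["age"] is None:
        problems.append((row["id"], "missing age"))
    elif not 0 <= row["age"] <= 130:
        problems.append((row["id"], "age out of range"))
    if not row["email"]:
        problems.append((row["id"], "missing email"))

print("Validation problems:", problems)
```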


What is quality?

Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations, and is maintainable. However, quality is a subjective term. Quality depends on who the customer is and their overall influence in the scheme of things. Customers of a software development project include end-users, customer acceptance test engineers, customer contract officers, customer management, the development organization's management, test engineers, testers, salespeople, software engineers, stockholders, and accountants. Each type of customer will have his or her own slant on quality. The accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.
