What is risk analysis? What if there isn't enough time for thorough testing? What is the role of test engineers? What is the role of a QA engineer?


What is risk analysis? What does it have to do with Severity and Priority?

Risk analysis is a method to determine how much risk is involved in something. In testing, it can be used to decide when to test something, or whether to test it at all. Items with higher risk values should be tested early and often; items with lower risk values can be tested later or, if time runs out, not at all. Risk analysis can also be applied to defects. Severity tells us how bad a defect is: "how much damage can it cause?". Priority tells us how soon we want the defect fixed: "should we fix this, and if so, by when?".
Companies usually use numeric values to calculate both. The number of points on the scale varies from place to place; I assume a five-point scale here, but a three-point scale is also common. Using a defect as an example, Major would be Severity 1 and Trivial would be Severity 5. Priority 1 implies the defect needs to be fixed immediately, while Priority 5 means it can wait until everything else is done. You can add or multiply the two numbers (there is only a small difference in the resulting order) and the result becomes the risk value. You use a defect's risk value to determine the order in which to address problems: the lower values must be addressed before the middle values, and the highest values can wait the longest.
Defect 12345: Foo displays an error message with incorrect path separators when the optional showpath switch is applied.
Severity: 5, Priority: 5, Risk value (addition method): 10

Defect 13579: Module Bar causes a system crash via a dereferenced handle.
Severity: 1, Priority: 1, Risk value (addition method): 2

Defect 13579 will usually be addressed before 12345.
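As a minimal sketch of the addition method, here is the calculation in Python. The Defect class and its field names are illustrative, not a real tracker's API:

def risk_value(severity: int, priority: int) -> int:
    # Addition method; multiplication (severity * priority) orders
    # most defects the same way.
    return severity + priority

# The two sample defects from above: (id, severity, priority).
defects = [(12345, 5, 5), (13579, 1, 1)]

# Lower risk values are addressed first.
for defect_id, sev, pri in sorted(defects, key=lambda d: risk_value(d[1], d[2])):
    print(defect_id, risk_value(sev, pri))   # 13579 2, then 12345 10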
Another method of risk assessment is based on a military standard, MIL-STD-882, which describes the risk of failure for military hardware. The main area of interest is section A.4.4.3 and its subsections, which cover the assessment of mishap risk. The standard uses a four-point severity rating (Catastrophic, Critical, Marginal, Negligible) and a five-point probability rating (Frequent, Probable, Occasional, Remote, Improbable). Then, rather than using a mathematical calculation to determine a risk level, it uses a predefined chart. It is this chart that is novel, as it groups risks together rather than giving them discrete values. If you want a copy of the current version, search for MIL-STD-882D using Yahoo! or Google.
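A chart-based lookup might be sketched like this in Python. The groupings below are invented for illustration only; they are not the actual mishap-risk chart from MIL-STD-882D, which you should consult directly:

# Rows: severity; columns: probability, in the order listed below.
PROBABILITIES = ["Frequent", "Probable", "Occasional", "Remote", "Improbable"]
RISK_CHART = {
    "Catastrophic": ("High",    "High",   "High",    "Serious", "Medium"),
    "Critical":     ("High",    "High",   "Serious", "Medium",  "Low"),
    "Marginal":     ("Serious", "Medium", "Medium",  "Low",     "Low"),
    "Negligible":   ("Medium",  "Low",    "Low",     "Low",     "Low"),
}

def mishap_risk(severity: str, probability: str) -> str:
    """Look up a grouped risk level from the chart instead of computing one."""
    return RISK_CHART[severity][PROBABILITIES.index(probability)]

print(mishap_risk("Critical", "Occasional"))  # -> Serious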

For a complicated system with many user profiles that have different rights, should you write different test cases for each profile, or one test case describing the expected results according to the user's rights? Is it logical to describe different expected results for a single action?

Answer1:
You will have to write one test case describing the results for the various kinds of users. You could use a tabular data form.
For each action, you would create a table:
First column: user type
Second column: expected result
This avoids writing a series of test cases where 90% of the information is the same and 10% is different, and it makes maintaining the tests easier as well (see the sketch below).
And the best way to run these tests against your application is with an automated tool.
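A sketch of the tabular form in Python follows. check_delete_record() is a hypothetical stand-in for driving the action under test; replace it with calls into your real application:

def check_delete_record(user_type: str) -> str:
    """Hypothetical driver: perform the 'delete record' action as the
    given user type and return the observed result."""
    raise NotImplementedError("wire this up to the application under test")

# One action, one table: first column user type, second column expected result.
DELETE_RECORD_TABLE = [
    ("admin",  "record deleted"),
    ("editor", "record deleted"),
    ("viewer", "permission denied"),
    ("guest",  "login required"),
]

def test_delete_record_by_user_type():
    for user_type, expected in DELETE_RECORD_TABLE:
        actual = check_delete_record(user_type)
        assert actual == expected, f"{user_type}: expected {expected!r}, got {actual!r}"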

Answer2:
Think of things in terms of use cases. Treat it like a completely different system for each user role, and create your own suite of cases for each role.

What if the software is so buggy it can't be tested at all?

In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus on critical bugs.
Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing, insufficient integration testing, poor design, or improper build and release procedures), managers should be notified and provided with documentation as evidence of the problem.

What is API Testing?

An API (Application Programming Interface) is a collection of software functions and procedures, called API calls, that can be executed by other software applications. Application developers write code that links to existing APIs to make use of their functionality. This link is seamless, and end users of the application are generally unaware that they are using a separately developed API.
During testing, a test harness (an application that links to the API and methodically exercises its functionality) is constructed to simulate the use of the API by end-user applications. The interesting problems for testers are:
1. Ensuring that the test harness varies parameters of the API calls in ways that verify functionality and expose failures. This includes assigning common parameter values as well as exploring boundary conditions.
2. Generating interesting parameter value combinations for calls with two or more parameters.
3. Determining the context under which an API call is made. This might include setting external environment conditions (files, peripheral devices, and so forth) and also internal stored data that affect the API.
4. Sequencing API calls to vary the order in which the functionality is exercised and to make the API produce useful results from successive calls (a minimal harness sketch follows this list).
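Here is a minimal harness sketch in Python covering points 1, 2, and 4. The built-in round(number, ndigits) stands in for a real API call; swap in a wrapper around the API you are actually testing:

import itertools

# Point 1: common values plus boundary conditions for each parameter.
numbers = [0.0, 0.5, -0.5, 2.675, 1e308, -1e308]
ndigits_values = [0, 1, -1, 15, -308]

# Point 2: exercise interesting combinations of the two parameters.
for number, nd in itertools.product(numbers, ndigits_values):
    print(f"round({number}, {nd}) -> {round(number, nd)}")

# Point 4: sequence calls against a stateful API (a plain list here) so
# each call's result depends on the calls made before it.
stack = []
stack.append(1)
stack.append(2)
assert stack.pop() == 2  # last in, first out
assert stack.pop() == 1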

What if there isn't enough time for thorough testing?

Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects.
Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. The checklist should include answers to the following questions:
* Which functionality is most important to the project's intended purpose?
* Which functionality is most visible to the user?
* Which functionality has the largest safety impact?
* Which functionality has the largest financial impact on users?
* Which aspects of the application are most important to the customer?
* Which aspects of the application can be tested early in the development cycle?
* Which parts of the code are most complex and thus most subject to errors?
* Which parts of the application were developed in rush or panic mode?
* Which aspects of similar/related previous projects caused problems?
* Which aspects of similar/related previous projects had large maintenance expenses?
* Which parts of the requirements and design are unclear or poorly thought out?
* What do the developers think are the highest-risk aspects of the application?
* What kinds of problems would cause the worst publicity?
* What kinds of problems would cause the most customer service complaints?
* What kinds of tests could easily cover multiple functionalities?
* Which tests will have the best high-risk-coverage to time-required ratio?
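The last question can be made concrete by ranking candidate tests by risk coverage gained per hour of effort. A small Python sketch, with candidates and scores invented for illustration:

candidates = [
    # (test, risk coverage score, hours required)
    ("payment processing end-to-end", 10, 5),
    ("login and session handling",     9, 2),
    ("report layout cosmetics",        2, 3),
]

for name, coverage, hours in sorted(candidates,
                                    key=lambda c: c[1] / c[2], reverse=True):
    print(f"{name}: {coverage / hours:.2f} risk coverage per hour")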

How do you test a module (web-based, developed in .NET) that loads data from a list (a text file) into a database (SQL Server)? It touches approximately 10 different tables, depending on the data in the list.
The job is to verify that the data that is supposed to get loaded is loaded correctly. The list might contain 60 million records. Any suggestions?

* Compare the record counts before and after the load and match them against the expected data load.
* Take sample records to ensure data integrity.
* Include test cases where the loaded data is visible functionally through the application. For example, if the load adds new users to the system, then logging in with the new users' credentials should work.
Finally, as for tools available in the market: you can be innovative in using functional automation tools like WinRunner by adding DB checkpoints, and you can write SQL queries to do the back-end testing. The test scenario (test case) details will determine which tools and techniques to narrow in on; a minimal back-end sketch follows.
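A sketch of the first two checks in Python with pyodbc against SQL Server. The connection string, the "users" table, the input file name, and run_data_load() are all placeholders for your environment and the module under test:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()

def row_count(table: str) -> int:
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    return cursor.fetchone()[0]

def count_lines(path: str) -> int:
    """Records in the input list (assumes one record per line)."""
    with open(path) as f:
        return sum(1 for _ in f)

def run_data_load(path: str) -> None:
    """Hypothetical: trigger the .NET module that loads the list."""
    raise NotImplementedError("invoke the load module under test here")

# Check 1: counts before and after the load should differ by exactly
# the number of records in the input list.
before = row_count("users")
run_data_load("userlist.txt")
assert row_count("users") - before == count_lines("userlist.txt")

# Check 2: spot-check a random sample of loaded rows for data integrity.
cursor.execute("SELECT TOP 100 login, email FROM users ORDER BY NEWID()")
for login, email in cursor.fetchall():
    pass  # compare field by field against the matching record in the file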

What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed, and the considerations listed under "What if there isn't enough time for thorough testing?" apply. The test engineer should then do ad hoc testing, or write up a limited test plan based on the risk analysis.

Is testing an art of thinking?

Answer1:
Think like someone who wants to break the application, like a hacker: find the weaknesses in the system.

Answer2:
To think like a tester, think negatively rather than positively, because a tester always tries to break the application, for example by supplying invalid values.

Answer3:
A common view of how testers think:
- Testers are "negative" thinkers
- Testers complain
- Testers like to break things
- Testers take a special thrill in delivering bad news
The authors introduce an alternate view:
- Testers don't complain, they offer evidence
- Testers don't like to break things, they like to dispel the illusion that things work
- Testers don't take a special thrill in delivering bad news, they enjoy freeing their clients from false beliefs
They go on to explain how testers should think:
- Deriving inferences
- Thinking technically
- Thinking creatively
- Thinking critically
- Thinking practically
- Attempting to answer questions
- Exploring and thinking
- Using logic

Answer4:
Testers are destroyers with a creative purpose. Always keep one thing in mind: "CREATIVE DESTRUCTION IS WHAT WE WANT TO ACHIEVE."
One thing to add: a tester's destructive qualities should be brought to bear only after the smooth flow of the application is assured, i.e., after the application passes positive testing. If the application doesn't pass even the positive tests, the testing strategy falls apart.
And after all, competition is appreciated when both sides are equally strong.
So before bringing the real quality of testers into the act, one should ensure the application has passed positive testing.

What is the role of test engineers?

We, test engineers, speed up the work of your development staff and reduce the risk of your company's legal liability. We give your company evidence that the software is correct and operates properly, and we improve your problem tracking and reporting. We maximize the value of your software and the value of the devices that use it. We assure the successful launch of your product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool, and before your employees get bogged down. We help the work of your software development staff, so your development team can devote its time to building up your product. We also promote continual improvement. We provide documentation required by the FDA, the FAA, other regulatory agencies, and your customers. We save your company money by discovering defects early in the design process, before failures occur in production or in the field. And we save your company's reputation by discovering bugs and design flaws before they can damage it.

What is the role of a QA engineer?

The QA engineer's role is as follows: we, QA engineers, use the system much like real users would, find bugs, find ways to reproduce them, submit bug reports to the developers, and provide feedback to the developers, i.e., tell them whether they've achieved the desired level of quality.
