How do you insert a checkpoint on an image to check the enabled property in QTP?
Answer1:
As you are saying that all the images behave as push buttons, you can check the enabled or disabled property. If you are not able to find that property, go to the object repository for that object and click Add/Remove to add the available properties to that object. Let me know if that works. If you capture it as an image, you need to check the visible or invisible property instead; that might also help, as there are no enabled or disabled properties for the image object.
Answer2:
An Image checkpoint does not have any property that verifies the enabled/disabled state.
One thing you need to check is:
* Find out from the developer whether he is showing different images for the activated/deactivated states, i.e. a greyed-out image. That is the only way a developer can show an activated/deactivated state if he is using an "image". Otherwise he might be using a button whose face is displayed as an image.
* If it is a button displayed with an image as its face, you would need to use an Object Properties checkpoint.
How do you write test cases?
When I write test cases, I concentrate on one requirement at a time. Then, based on that one requirement, I come up with several real life scenarios that are likely to occur in the use of the application by an end user.
When I write test cases, I describe the inputs, action, or event, and their expected results, in order to determine if a feature of an application is working correctly. To make the test case complete, I also add particulars e.g. test case identifiers, test case names, objectives, test conditions (or setups), input data requirements (or steps), and expected results.
Additionally, if I have a choice, I like writing test cases as early as possible in the development life cycle. Why? Because, as a side benefit of writing test cases, I am often able to find problems in the requirements or design of an application, and because the process of developing test cases forces me to think through the operation of the application completely.
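As a rough illustration of those pieces, here is a minimal sketch in Python (not tied to any particular test tool; the record fields and the login scenario are made up for the example):

from dataclasses import dataclass, field

@dataclass
class TestCase:
    # The particulars listed above: identifier, name, objective, setup, steps, expected results
    identifier: str
    name: str
    objective: str
    setup: list[str] = field(default_factory=list)             # test conditions
    steps: list[str] = field(default_factory=list)             # input data / actions
    expected_results: list[str] = field(default_factory=list)  # what determines pass/fail

tc = TestCase(
    identifier="TC-LOGIN-001",
    name="Registered user can log in",
    objective="Verify that a registered user with valid credentials reaches the home page",
    setup=["User 'jdoe' exists and is active", "Login page is reachable"],
    steps=["Open the login page", "Enter 'jdoe' and the correct password", "Click 'Log in'"],
    expected_results=["User is redirected to the home page", "A welcome message shows the user's name"],
)
print(tc.identifier, "-", tc.name)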
Differences between System Testing and User Acceptance Testing?
Answer1:
System testing: the process of testing an integrated system to verify that it meets specified requirements.
Acceptance testing: formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entity to determine whether or not to accept the system.
First, I don't classify incidents or defects by the phase of the software development or testing process; I prefer to classify them by their type, e.g. requirements, features and functionality, structural bugs, data, integration, etc. The value of categorising faults is that it helps us focus our testing effort where it is most important, and we should have distinct test activities that address the problems of poor requirements, structure, etc.
You don't do User Acceptance Testing merely because the software has been delivered! Take care with the concepts of testing!
Answer2:
In my company we do not perform user acceptance testing, our clients do. Once our system testing is done (and other validation activities are finished) the software is ready to ship. Therefore any bug found in user acceptance testing would be issued a tracking number and taken care of in the next release. It would not be counted as a part of the system test.
Answer3:
This is what I feel user acceptance testing is; I hope you find it useful. Definition:
User Acceptance Testing is formal testing conducted to determine whether a software system satisfies its acceptance criteria and to enable the buyer to determine whether to accept the system.
Objective:
User Acceptance Testing is designed to determine whether the software is fit for the user to use, whether it fits into the user's business processes, and whether it meets his/her needs.
Entry Criteria:
End of the development process, after the software has passed all the tests that determine whether it meets the predetermined functionality, performance, and other quality criteria.
Exit Criteria:
After verification that the delivered documents are adequate and consistent with the executable system, and that the software system meets all the requirements of the customer.
Deliverables:
User Acceptance Test Plan
User Acceptance Test Cases
User guides/docs
User Acceptance Test Reports
Answer4:
System Testing: Done by QA at the development end. It is done after integration is complete and all integration P1/P2/P3 bugs are fixed. The code is frozen and no more code changes are taken. Then all the requirements are tested and all the integration bugs are verified.
UAT: Done by QA (trained to act like end users). All the requirements are tested, and the whole system is verified and validated.
What is the difference between a test plan and a test scenario?
Difference number 1: A test plan is a document that describes the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a document that describes both typical and atypical situations that may occur in the use of an application.
Difference number 2: Test plans define the scope, approach, resources, and schedule of the intended testing activities, while test procedures define test conditions, data to be used for testing, and expected results, including database updates, file outputs, and report results.
Difference number 3: A test plan is a description of the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a description of test cases that ensure that a business process flow, applicable to the customer, is tested from end to end.
Can you give me an example on reliability testing?
For example, our products are defibrillators. From direct contact with customers during the requirements gathering phase, our sales team learns that a large hospital wants to purchase defibrillators with the assurance that 99 out of every 100 shocks will be delivered properly.
In this example, the fact that our defibrillator can run for 250 hours without any failure (in order to demonstrate reliability) is irrelevant to these customers. In order to test for reliability, we need to translate terminology that is meaningful to the customers into equivalent delivery units, such as the number of shocks. Therefore we describe the customer's needs in a quantifiable manner, using the customer's terminology. For example, our quantified reliability testing goal becomes the following: our defibrillator will be considered sufficiently reliable if 10 (or fewer) failures occur in 1,000 shocks.
Then, for example, we use a test/analyze/fix technique and couple reliability testing with the removal of errors. When we identify a failed delivery of a shock, we send the software back to the developers for repair. The developers build a new version of the software, and then we deliver another 1,000 shocks (into dummy resistor loads). We track failure intensity (i.e. failures per 1,000 shocks) in order to guide our reliability testing, determine the feasibility of the software release, and determine whether the software meets our customers' reliability requirements.
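A quick sketch in Python of the arithmetic behind that release criterion, using the numbers from the example above (the function names are made up for illustration):

def failure_intensity(failures, shocks, per=1000):
    # Failures per 1,000 delivery units (here, shocks)
    return failures / shocks * per

def meets_reliability_goal(failures, shocks, max_per_1000=10):
    # Release criterion from the example: 10 or fewer failures per 1,000 shocks
    return failure_intensity(failures, shocks) <= max_per_1000

print(failure_intensity(7, 1000))        # 7.0 failures per 1,000 shocks
print(meets_reliability_goal(7, 1000))   # True: this build meets the goal
print(meets_reliability_goal(14, 1000))  # False: send the build back for repair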
Need a function to find all the fields?
Ex: a string "abcd, efgh,ight".
I want to break this string wherever the delimiter (the comma) is found.
Answer1:
And return the delimited fields as a list of strings? Sounds like a Perl split function. You could build one of your own, something like:
[ ] // Knocked this together in a few minutes; I am sure there is a more efficient way of doing things,
[ ] // but this is a cobbling together of several built-in functions.
[ ] // Note: GetField treats each character of sDelim as a delimiter, and an empty field ends the loop.
[-] LIST OF STRING Split (STRING sDelim, STRING sData)
[ ] LIST OF STRING lsReturn
[ ] STRING sSegment
[ ] INTEGER iField = 1
[ ] sSegment = GetField (sData, sDelim, iField)
[-] while sSegment != ""
[ ] ListAppend (lsReturn, Trim (sSegment))
[ ] iField = iField + 1
[ ] sSegment = GetField (sData, sDelim, iField)
[ ] return lsReturn
Answer2:
You could use something like this... I hope I am understanding the problem:
[+] testcase T1()
[ ] STRING sTest = "hello, there I am happy"
[ ] STRING sTest1 = GetField (sTest, ",", 2)
[ ] Print(sTest1)
[ ]
[ ] // This prints "there I am happy"
[ ] // GetField (sTest, ",", 1) would print "hello", etc.
Answer3:
Below is a function which returns all of the fields (as a LIST OF STRING).
[+] LIST OF STRING ConvertToList (STRING sStr, STRING sDelim)
[ ] INTEGER iIndex= 1
[ ] LIST OF STRING lsStr
[ ] STRING sToken = GetField (sStr, sDelim, iIndex)
[ ]
[ ] // If the string starts with the delimiter, the first field comes back empty; skip past it
[+] if (iIndex == 1 && sToken == "")
[ ] iIndex = iIndex + 1
[ ] sToken = GetField (sStr, sDelim, iIndex)
[ ]
[+] while (sToken != "")
[ ] ListAppend (lsStr, sToken)
[ ] iIndex = iIndex+1
[ ] sToken = GetField (sStr, sDelim, iIndex)
[ ] return lsStr
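For comparison outside SilkTest, here is roughly the same split-and-trim behaviour sketched in Python, using the string from the question (purely illustrative):

def convert_to_list(s, delim=","):
    # Split on the delimiter, trim whitespace, and drop empty fields,
    # roughly mirroring the 4Test functions above
    return [f.strip() for f in s.split(delim) if f.strip() != ""]

print(convert_to_list("abcd, efgh,ight"))   # ['abcd', 'efgh', 'ight']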
What is the difference between monkey testing and smoke testing?
Difference number 1: Monkey testing is random testing, while smoke testing is nonrandom testing. Smoke testing deliberately exercises the entire system from end to end, with the goal of exposing any major problems.
Difference number 2: Monkey testing is performed by automated testing tools, while smoke testing is usually performed manually (see the sketch after this list for a minimal "dumb monkey").
Difference number 3: Monkey testing is performed by "monkeys", while smoke testing is performed by skilled testers.
Difference number 4: "Smart monkeys" are valuable for load and stress testing, but not very valuable for smoke testing, because they are too expensive for smoke testing.
Difference number 5: "Dumb monkeys" are inexpensive to develop and are able to do some basic testing, but if we used them for smoke testing, they would find few bugs.
Difference number 6: Monkey testing is not thorough testing, but smoke testing is thorough enough that, if the build passes, one can assume that the program is stable enough to be tested more thoroughly.
Difference number 7: Monkey testing either does not evolve, or evolves very slowly. Smoke testing, on the other hand, evolves as the system evolves from something simple to something more thorough.
Difference number 8: Monkey testing takes "six monkeys" and a "million years" to run. Smoke testing, on the other hand, takes much less time to run, i.e. from a few seconds to a couple of hours.
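To make the "dumb monkey" idea concrete, here is a minimal Python sketch (purely illustrative, not tied to any real tool) that throws random strings at a function under test and only checks that it does not crash:

import random
import string

def dumb_monkey(func, runs=1000, max_len=20, seed=42):
    # Feed random strings to func and count crashes; no knowledge of expected behaviour
    random.seed(seed)
    crashes = 0
    for _ in range(runs):
        data = "".join(random.choices(string.printable, k=random.randint(0, max_len)))
        try:
            func(data)
        except Exception:
            crashes += 1
    return crashes

# Hypothetical function under test: a simple parser that should never raise on arbitrary input
def parse_csv_line(line):
    return [f.strip() for f in line.split(",")]

print(dumb_monkey(parse_csv_line), "crashes out of 1000 random inputs")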
Is it a good thing to share test cases with customers?
That's generally a good thing, but the question is why do they want to see them?
One potential problem is that they may be considering changing outsourcing firms and want to use the test cases elsewhere. If that can be prevented, please do so.
Another problem is that they may want to micromanage your testing efforts. It's one thing for them to audit your work to prove to themselves that you're doing a good job; it's an entirely different matter if they intend to tell you that you don't have enough test coverage on module foo, far too much coverage on module bar, and please correct it.
Another issue may be that they are seeking litigation and they need proof that you were negligent in some area of testing.
It's never a bad thing to have your customer wanting to be involved, unless you're a large company and this is a small (in terms of sales) customer.
What are your concerns about this? Can you give more information on your situation and the customer's?