Software Testing Q&A: Daily Builds and Smoke Tests, Tracing Fixed Bugs, and the Differences Between Load, Reliability, and Volume Testing

Tell me about daily builds and smoke tests.

The idea is to build the product every day, and test it every day. The software development process at Microsoft and many other software companies requires daily builds and smoke tests. According to their process, every day, every single file has to be compiled, linked, and combined into an executable program; and then the program has to be "smoke tested".
Smoke testing is a relatively simple check to see whether the product "smokes" when it runs.
Please note that you should add revisions to the build only when it makes sense to do so. You should establish a build group and build daily; set your own standard for what constitutes "breaking the build"; create a penalty for breaking the build; and check for broken builds every day.
In addition to building daily, you should smoke test the builds, and smoke test them daily. Make the smoke test evolve as the system evolves, and keep building and smoke testing daily even when the project is under pressure.
Think about the many benefits of this process! Daily builds and smoke tests minimize integration risk, reduce the risk of low quality, support easier defect diagnosis, improve morale, enforce discipline, and keep pressure-cooker projects on track. If you build and smoke test daily, success will come, even when you're working on large projects!
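As a rough illustration of the process, here is a minimal sketch of a daily build-and-smoke-test driver in Python. The build command (make all) and the smoke_test.py script are assumptions for the example, not a prescribed setup:

# nightly_build.py - a minimal daily-build-and-smoke-test driver (sketch).
# The build command, smoke script, and messages are placeholder assumptions.
import subprocess
import sys
from datetime import date

def run(step, cmd):
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # "Breaking the build" - fail loudly so the build group can react
        print("BUILD BROKEN (%s) on %s:\n%s" % (step, date.today(), result.stderr))
        sys.exit(1)
    print("%s passed" % step)

run("compile and link", ["make", "all"])        # build every file, every day
run("smoke test", ["python", "smoke_test.py"])  # does the product "smoke"?
print("Daily build and smoke test succeeded.")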

How do you read data from a Telnet session?

One approach, if you reach the Telnet session through a PuTTY window driven by SilkTest, is to declare the window with a method that copies the visible screen contents to the clipboard and returns them as a list of strings:

window DialogBox Putty
    tag "* - PuTTY"

    // Capture the screen contents and return them as a list of strings
    LIST OF STRING getScreenContents ()
        LIST OF STRING lsClipboardContents

        // Open the system menu (Alt+Space) and select the
        // "Copy All to Clipboard" menu command
        this.TypeKeys ("<Alt-Space>o")

        // Get the clipboard contents and return them
        lsClipboardContents = Clipboard.GetText ()
        return lsClipboardContents

I then created a function that searches the screen contents for the data that needs to be validated. This works fine for me; here it is to study. I hope it helps.
void CheckOutPut (STRING sErrorMessage)
    // sBatchSuccess ("Yes" when the process is expected to succeed) and
    // kPlatform are assumed to be defined elsewhere in the test framework;
    // TrimScreenContents () is a helper defined elsewhere
    LIST OF STRING lsScreenContents

    switch (kPlatform)
        case "putty"
            Putty.SetActive ()

            // Poll the console output until the launcher has finished
            while (TRUE)
                // Capture and trim the screen contents
                lsScreenContents = Putty.getScreenContents ()
                Sleep (1)
                lsScreenContents = TrimScreenContents (lsScreenContents)
                Sleep (1)

                if (sBatchSuccess == "Yes")
                    if (ListFind (lsScreenContents, "BUILD FAILED"))
                        LogError ("Process should not have failed.")
                        break
                    if (ListFind (lsScreenContents, "BUILD SUCCESSFUL"))
                        Print ("Successful")
                        break
                else
                    // The process was expected to fail
                    if (ListFind (lsScreenContents, "BUILD FAILED") == 0)
                        LogError ("Process should have failed.")
                        break
                    else
                        // Check for the expected error message
                        if (ListFind (lsScreenContents, sErrorMessage) == 0)
                            LogError ("Expected error message not found.")
                            Print ("Expected - {sErrorMessage}")
                            ListPrint (lsScreenContents)
                        break

        // Raise an exception if kPlatform is not a supported console
        default
            raise 1, "Unable to run console - please specify a platform setting"
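If a GUI automation tool is not required, the same kind of check can be done by reading the Telnet session directly. Below is a minimal sketch in Python using the standard telnetlib module (deprecated since Python 3.11 and removed in 3.13); the host name, port, and status strings are assumptions for illustration:

# Sketch: read a Telnet session's output and check it for status strings.
# Host, port, and the expected markers are placeholder assumptions.
import telnetlib

def check_output(expected_error: bytes, expect_success: bool) -> None:
    with telnetlib.Telnet("build-host.example.com", 23, timeout=30) as tn:
        # Read until the build prints a status line, then drain the buffer
        screen = tn.read_until(b"BUILD", timeout=60)
        screen += tn.read_very_eager()
        if expect_success:
            if b"BUILD FAILED" in screen:
                print("ERROR: process should not have failed")
        elif expected_error not in screen:
            print("ERROR: expected message %r not found" % expected_error)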

What is the difference between system testing and integration testing?

"System testing" is a high level testing, and "integration testing" is a lower level testing. Integration testing is completed first, not the system testing. In other words, upon completion of integration testing, system testing is started, and not vice versa.
For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. For system testing, the complete system is configured in a controlled environment, and test cases are developed to simulate real life scenarios that occur in a simulated real life test environment.
The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.
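As a rough illustration, assume a hypothetical application in which a parser component feeds a report generator. An integration test exercises the interface between the two components, while a system test drives the fully assembled application the way a user would; the module names and the myapp command are invented for this example:

# Hypothetical components and CLI; all names are assumptions.
import subprocess
import unittest

from myapp.parser import parse_order      # hypothetical module
from myapp.reports import build_report    # hypothetical module

class IntegrationTest(unittest.TestCase):
    # Integration test: exercise the interface between two components
    def test_parser_output_feeds_report_generator(self):
        order = parse_order("widget,2,9.99")
        report = build_report([order])
        self.assertIn("widget", report)

class SystemTest(unittest.TestCase):
    # System test: run the complete, configured system end to end
    def test_end_to_end_order_report(self):
        result = subprocess.run(
            ["myapp", "report", "--input", "orders.csv"],
            capture_output=True, text=True)
        self.assertEqual(result.returncode, 0)
        self.assertIn("TOTAL", result.stdout)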

How do you trace a fixed bug in a test case?

Answer1:
Fixed defects can be tracked in the defect tracking tool; maintaining this information is outside the scope of a test case.
The defect tracking tool should indicate that the problem has been fixed, and the associated test case now has a passing result.
If and when you report test results for this test cycle, you should provide this sort of information; i.e., test failed, problem report written, problem fixed, test passed, etc...

Answer2:
Suppose you use Jira (or Bugzilla) to manage your test cases as well as your bugs. When a test discovers a bug, you link the two, marking the test as "in work" and "waiting for bug X". When the developer resolves the bug, you follow the link back to the test case, then retest and close it.
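As a rough sketch of that workflow, Jira's REST API exposes issue links, so a script can find the test-case issues linked to a resolved bug. The server URL, credentials, and issue key below are placeholder assumptions:

# Sketch: list the issues linked to a resolved bug via Jira's REST API.
# Server URL, credentials, and issue key are placeholders.
import requests

JIRA = "https://jira.example.com"
resp = requests.get(
    "%s/rest/api/2/issue/BUG-123" % JIRA,
    params={"fields": "issuelinks,status"},
    auth=("tester", "secret"))
resp.raise_for_status()
fields = resp.json()["fields"]

if fields["status"]["name"] == "Resolved":
    for link in fields["issuelinks"]:
        # Each link points outward or inward depending on its direction
        linked = link.get("outwardIssue") or link.get("inwardIssue")
        if linked:
            print("Retest linked test case:", linked["key"])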

What is the difference between performance testing and load testing?

Load testing is a blanket term that is used in many different ways across the professional software testing community; it is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing: during stress testing, the load is so great that errors are the expected result, though there is a gray area between stress testing and load testing.
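To make the distinction concrete, here is a minimal sketch of a load test that ramps up the number of concurrent users against a service. Under a load test the requests are expected to keep succeeding; a stress test would keep raising the level until errors become the expected result. The URL and load levels are placeholder assumptions:

# Sketch: ramp up concurrent requests and watch latency and error rate.
# The target URL and the load levels are placeholder assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"

def hit(_):
    start = time.perf_counter()
    try:
        urllib.request.urlopen(URL, timeout=5).read()
        return (time.perf_counter() - start, None)
    except Exception as exc:
        return (time.perf_counter() - start, exc)

for users in (1, 10, 50, 100):          # increasing load levels
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(hit, range(users * 10)))
    errors = sum(1 for _, exc in results if exc)
    avg = sum(t for t, _ in results) / len(results)
    print("users=%3d  avg=%.3fs  errors=%d" % (users, avg, errors))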

After the migration is done, how do you test the application? (The front end hasn't changed; just the database changed.)

Answer1:
You can concentrate on the test cases that involve DB transactions: inserts, updates, deletes, etc.

Answer2:
Focus on the database tests, but it's important to analyze the differences between the two schemas. You can't just focus on the front end. Also, be careful to look for shortcuts that the DBAs may be taking with the schema.
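A quick way to start that schema analysis is to diff the two databases programmatically. Here is a minimal sketch using SQLite for illustration; the database file names are placeholder assumptions:

# Sketch: diff table definitions between the pre- and post-migration
# databases. Uses SQLite for illustration; file names are placeholders.
import sqlite3

def schema(path):
    conn = sqlite3.connect(path)
    try:
        rows = conn.execute(
            "SELECT name, sql FROM sqlite_master WHERE type = 'table'")
        return {name: sql for name, sql in rows}
    finally:
        conn.close()

old, new = schema("before_migration.db"), schema("after_migration.db")
for table in sorted(old.keys() | new.keys()):
    if table not in new:
        print("DROPPED:", table)
    elif table not in old:
        print("ADDED:", table)
    elif old[table] != new[table]:
        # A changed definition is where DBA "shortcuts" tend to hide
        print("CHANGED:", table)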

What is the difference between reliability testing and load testing?

The term, reliability testing, is often used synonymously with load testing. Load testing is a blanket term that is used in many different ways across the professional software testing community. Load testing generally stops short of stress testing: during stress testing, the load is so great that errors are the expected result, though there is a gray area between stress testing and load testing.

Some general guidelines on what to test for web-based applications:

1. Navigation: Users move to and from pages, click on links, click on images (thumbnails), etc. Navigation in a WebSite should be quick and error free.
2. Server Response. How fast the WebSite host responds influences whether a user (i.e. someone on the browser) moves on or gives up.
3. Interaction & Feedback. For passive, content-only sites the only real quality issue is availability. For a WebSite that interacts with the user, the big factor is how fast and how reliable that interaction is.
4. Concurrent Users. Do multiple users interact on a WebSite? Can they get in each other's way? While WebSites often resemble client/server structures, with multiple users at multiple locations a WebSite can be much different, and much more complex, than a classic client/server application.
5. Browser Independence. Tests should be realistic, but not dependent on a particular browser.
6. No Buffering, Caching. Local caching and buffering -- often a way to improve apparent performance -- should be disabled so that timed experiments are a true measure of the Browser response time.
7. Fonts and Preferences. Most browsers support a wide range of fonts and presentation preferences.
8. Object Mode. Edit fields, push buttons, radio buttons, check boxes, etc. All should be treatable in object mode, i.e. independent of the fonts and preferences.
9. Page Consistency. Is the entire page identical with a prior version? Are key parts of the text the same or different?
10. Table, Form Consistency. Are all of the parts of a table or form present? Correctly laid out? Can you confirm that selected texts are in the "right place"?
11. Page Relationships. Are all of the links on a page the same as they were before? Are there new or missing links? Are there any broken links? (A minimal link-checker sketch follows this list.)
12. Performance Consistency, Response Times. Is the response time for a user action the same as it was (within a range)?
13. Image File Size. File size should be closely examined when selecting or creating images for your site. This is particularly important when your site is directed at an audience that may not enjoy high-bandwidth, fast connections.
14. Avoid the use of HTML "frames". The problems with frames-based site designs are well documented, including: the inability to bookmark subcategories of the site, difficulty in printing frame cell content, and disabling the Web browser's "back" button as a navigation aid.
15. Security. Ensure data is encrypted before transferring sensitive information, wherever required. Test user authentication thoroughly. Ensure all backdoors and test logins are disabled before going live with the web application.
16. Sessions. Ensure session validity is maintained throughout a web transaction, e.g., filling in a web form that spans several pages. Forms should retain information when using the 'back' button wherever required for user convenience. At the same time, forms need to be reset wherever security is an issue, such as password fields.
17. Error handling. Web navigation should be quick and error free. However, sometimes errors cannot be avoided. It is a good idea to have a standard error page that handles all errors; this is cleaner than displaying a raw 404 page. After displaying the error page, users can be automatically redirected to the home page or another relevant page. At the same time, the error can be logged and a message sent to notify the admin.
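For item 11, here is a minimal sketch of a broken-link check using only the Python standard library; the start URL is a placeholder assumption:

# Sketch: fetch a page, extract its links, and report any that fail.
# The start URL is a placeholder.
from html.parser import HTMLParser
from urllib.parse import urljoin
import urllib.request

PAGE = "http://localhost:8080/index.html"

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

parser = LinkParser()
html = urllib.request.urlopen(PAGE, timeout=10).read()
parser.feed(html.decode("utf-8", "replace"))

for href in parser.links:
    url = urljoin(PAGE, href)           # resolve relative links
    try:
        urllib.request.urlopen(url, timeout=10)
    except Exception as exc:
        print("BROKEN:", url, "-", exc)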

What is the difference between volume testing and load testing?

The term, volume testing, is often used synonymously with load testing. Load testing is a blanket term that is used in many different ways across the professional software testing community.

What types of testing can you tell me about?

Each of the following represents a different type of testing: black box testing, white box testing, unit testing, incremental testing, integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing.
