Tuesday, June 7, 2016

Approach to writing Manual Test cases/Scripts


I am in the process of writing manual test scripts for a system in the System Testing phase. A comprehensive set of use cases has been prepared by the Business Analysts on the team, and from these, test scenarios and test conditions have been derived.
The system is a multi-component request processing system where a user at one end can request a service from a device at the other end. The system processes these requests (authenticates the user, authorises the request, routes it to the correct device, etc.). There are also different data variables involved: different service users, different device alerts, different device statuses, and so on. These will also need to be considered when testing.
So what is the best way of writing the scripts? Each use case has multiple steps in its flow. Do we write a test for each step in the flow, or for the whole use case? I wrote tests for each step in the flow, but in the end it looked very repetitive. I could have written one test for the whole use case and just changed the test data (but is this really one test?). An example use case would be 'Process request'. A sample step in the flow is 'Acknowledge request from user' and another would be 'Log request'. Should a separate test be written for each of these parts of the flow, or one test for 'Process request'?
I don't know if I'm looking in the wrong places, but I usually find lots of articles and docs on test planning, test preparation, etc., but little on writing effective manual test cases/scripts.
Thanks

3 Answers

In my experience with documenting system tests, I've found a multi-layered approach works. I really like Microsoft Test Manager for this because of two things: the ability to define input parameters for manual tests and the concept of shared test steps which can be used by any test case.
You don't mention whether you're using a test case tool, Word, Excel or some other method of documentation, but you can adapt what I'm describing to any tool.
  • Work top-down: I start at the use case or acceptance criteria if there are no test scenarios or test conditions defined, and treat each scenario as a high level test in a larger test suite.
  • Identify and extract repetitions: In any moderately complex system there will be a lot of repeated actions, either with identical data or with near-identical data. I aim to extract anything that is going to happen more than once into its own unit (either a test that gets linked as a prerequisite to another test or as shared test steps) and nest as deep as I have to. For instance, with login credentials, I'll define a test or shared steps for "successful admin login" which contains admin login credentials, a link to "successful login" shared steps, and the expected view or data returned (this may be in the generic successful login). The "successful login" contains even more generic "login" shared steps with no indication of what happens on completion - that's handled by the success/fail test steps. It's effectively refactoring for DRY in your manual test cases.
  • Define data separately: I prefer to have my test data as a separate source than the test cases, and reference it. The precise form of the data will vary, but I've used database, zipped files containing data sets, text files, XML... Whatever works. The key thing here is that it's something I can reference from any test case and reuse infinitely.
  • "Just in time" detailing: Generally speaking, I don't go into details in my test cases until I need to. When I'm initially creating them, I'll keep it basic - a test case might be "logged in admin requests A from device Z, all OK" (this usually corresponds closely to any identified scenarios). Then I'll break this down to a series of smaller tests: "1. log in as admin; 2. send request A. 3. check response from device Z". From that, I'll reference the more detailed repeated items - so for instance I'll have a reference to the exact structure of the request being sent. I rarely if ever go as far as "click this button" type test steps.
  • Leave room for exploration: Rather than script manual tests in extreme detail, I prefer to work at a level that allows the tester to choose different ways of entering data or performing actions. I'll usually include mention of critical actions to use (such as "check each defined keyboard shortcut triggers the defined action in document X page Y") - this works better when testers are familiar with the application and/or are experienced testers.
  • Steel thread first: This should go without saying, but still... My priority is always to start with the functionality defined in the use cases, and make sure that works under the conditions which are expected and/or defined. I don't even consider anything outside that boundary until I know that much is working. Often I don't detail other tests until after I've got the steel thread sorted, because there is rarely time to test as completely as I'd like (and if I had a dollar for every time I've been unable to test anything outside the steel thread, I'd have a whole lot more money than I do).
  • Remember the goal: The reason for your tests is to provide the business people running the project with information about the state of the project. Testers aren't the gatekeepers - we don't have the perspective. Your tests should be designed to give as much information as possible about the project's state in the shortest possible time. In order to do this, you'll need to identify the most critical scenarios (which can be interesting when you're in an environment where everything is critical - been there...) and prioritize your tests accordingly.
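The "extract repetitions" and "define data separately" ideas above can be sketched in code, treating shared steps as reusable functions that higher-level test cases compose, with the test data referenced from a separate source. All names here (the step text, the `admin01` credential) are hypothetical illustrations, not part of any particular tool:

```python
# A minimal sketch of refactoring manual test cases for DRY:
# shared steps are plain functions, and higher-level test cases compose them,
# so a change to the login procedure is made in exactly one place.

def login_steps(credentials):
    """Generic 'login' shared steps - no assumption about the outcome."""
    return [
        "Open the login page",
        f"Enter username '{credentials['user']}' and the matching password",
        "Click the Login button",
    ]

def successful_admin_login(test_data):
    """'Successful admin login' = generic login + admin data + expected view."""
    steps = login_steps(test_data["admin"])
    steps.append("Verify the admin dashboard is displayed")
    return steps

# Test data lives apart from the test cases and is merely referenced.
test_data = {"admin": {"user": "admin01"}}

for step in successful_admin_login(test_data):
    print(step)
```

A change to the generic login procedure now propagates to every test case that references it, which is the manual-testing equivalent of refactoring for DRY.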
Great answer Kate! – Justin Jan 8 '14 at 20:03
Full disclosure: I work for Rainforest QA. Because of the nature of our platform we have a strict but relatively simple technique for test writing to ensure that our customers get reliable and consistent results.
That said, our approach to test writing works well for manual testing even if you don't use the Rainforest platform. The key points of our test writing philosophy are:
1) Keep the scope of each test narrow. We encourage our users to limit each test case to a single, discrete process. For example, the scope of one test might be to create a new account, and another would be to log in to an existing account. By keeping the scope of individual tests narrow, it's easier to tell whether that particular interaction is working correctly, without getting bogged down in extraneous information.
2) Design tests for deterministic results. Each test is broken down into pairs of actions and questions. For example:
Action: Click the "Create an Account" button on the top left of the page.
Question: Did the "create an account" popup box appear?
Every question in the action-question pair must be answered with a yes or a no. This binary format is designed to reduce any fuzziness about the results of the test. This takes the burden of interpreting unclear results off the tester and helps to eliminate miscommunication about whether the test passed or not.
3) Make your tests easy for anyone to understand. When you work closely with a product, it can be easy to forget that your end users don't have the inside knowledge that you have. This is also especially important if you have a remote team of testers or use a crowdsourced platform like Rainforest to run your tests. But even if you don't, it's a good practice to keep because it forces you to put yourself in the shoes of your users.
You can check out more of our test-writing approach in this blog post: https://www.rainforestqa.com/blog/2016-04-11-how-to-write-better-qa-tests
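The action-question format described above can be represented as simple data: each step pairs one tester action with one yes/no question, so a run reduces to a list of booleans. This is only an illustrative sketch, not Rainforest's actual format or API:

```python
# Sketch of the action/question pair format: every question must be
# answerable yes/no, so a test's outcome is unambiguous.

test_case = [
    ("Click the 'Create an Account' button on the top left of the page",
     "Did the 'create an account' popup box appear?"),
    ("Fill in the form and click Submit",
     "Was the confirmation message shown?"),
]

def run(test_case, answers):
    """A test passes only if every question was answered 'yes' (True)."""
    assert len(answers) == len(test_case), "one answer per question required"
    return all(answers)

result = run(test_case, [True, True])
print("PASS" if result else "FAIL")
```

The binary format means a single "no" anywhere fails the whole test, which is exactly the determinism the answer argues for.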

Saturday, May 7, 2016

Top 40 QA Interview Questions

1) What is the difference between the QA and software testing?
The role of QA (Quality Assurance) is to monitor the quality of the process used to produce a quality product. Software testing, by contrast, is the process of exercising the final product to check its functionality and to see whether it meets the users' requirements.
2) What is Testware?
Testware is the subset of software which helps in performing the testing of an application. It is the term given to the combination of software applications and utilities required for testing a software package.
3) What is the difference between build and release?
Build: a number given to installable software that the development team hands over to the testing team.
Release: a number given to installable software that is handed over to the customer by the tester or developer.
4) What are the automation challenges that QA team faces while testing?
  • Exploiting the automation tool effectively
  • Frequency of use of a test case
  • Reusability of automation scripts
  • Adaptability of test cases for automation
5) What is bug leakage and bug release?
Bug release is when software or an application is handed over to the testing team knowing that a defect is present in the release. The priority and severity of such a bug are low, since it can be removed before the final handover.
Bug leakage is when a bug is discovered by end users or the customer after being missed by the testing team while testing the software.
6) What is data driven testing?
Data-driven testing is an automated testing approach in which input and expected output values are read from external data files (CSV files, Excel files, data pools and so on) rather than hard-coded into the test. It is used when the same test logic must be run against data values that change over time.
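A minimal data-driven sketch: the test logic is written once and the input/expected values come from a CSV source. The `add()` function and the data are illustrative stand-ins for a real system under test; in practice the data would live in an external file:

```python
# Data-driven testing: one test loop, many data rows.
import csv
import io

# In practice this would be an external .csv file; inlined here for brevity.
data_file = io.StringIO("a,b,expected\n1,2,3\n10,-4,6\n0,0,0\n")

def add(a, b):          # the "system under test" (hypothetical)
    return a + b

failures = []
for row in csv.DictReader(data_file):
    actual = add(int(row["a"]), int(row["b"]))
    if actual != int(row["expected"]):
        failures.append(row)

print(f"{len(failures)} failing rows")
```

Adding a new test case is now a matter of adding a data row, with no change to the test code itself.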
7) Explain the steps for Bug Cycle?
  • Once a bug is identified by the tester, it is assigned to the development manager in OPEN status
  • If the bug is a valid defect, the development team will fix it; if it is not a valid defect, it will be ignored and marked as REJECTED
  • The next step is to check whether it is in scope; if the bug is not part of the current release, the defect is POSTPONED
  • If the defect has been raised earlier, the tester assigns it a DUPLICATE status
  • When the bug is assigned to a developer to fix, it is given an IN-PROGRESS status
  • Once the defect is repaired, the status changes to FIXED; finally, the tester gives it a CLOSED status if it passes the final test
8) What does the test strategy include?
The test strategy includes an introduction, resources, scope and schedule for test activities, test tools, test priorities, test planning and the types of tests to be performed.
9) Mention the different types of software testing?
  • Unit testing
  • Integration testing and regression testing
  • Shakeout testing
  • Smoke testing
  • Functional testing
  • Performance testing
  • White box and Black box testing
  • Alpha and Beta testing
  • Load testing and stress testing
  • System testing
10) What is branch testing and what is boundary testing?
Branch testing verifies that every branch of the application is exercised at least once. Boundary testing focuses on the limit conditions of the software: values at, just below and just above the boundaries of valid input.
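Boundary testing can be sketched in a few lines. Assuming a hypothetical field that accepts quantities from 1 to 100, the interesting inputs cluster at the limits:

```python
# Boundary testing sketch: test just below, on, and just above each limit.

def accept_quantity(n):
    """Hypothetical validator for a field accepting 1-100."""
    return 1 <= n <= 100

# Classic boundary values for the range [1, 100].
cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in cases.items():
    assert accept_quantity(value) == expected, value
print("all boundary cases pass")
```

Off-by-one errors (e.g. writing `<` instead of `<=`) are caught precisely by the 1/100 cases, which mid-range values like 50 would miss.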
11) What are the contents in test plans and test cases?
  • Testing objectives
  • Testing scope
  • Testing the frame
  • The environment
  • Reason for testing
  • The criteria for entrance and exit
  • Deliverables
  • Risk factors
12) What is Agile testing and what is the importance of Agile testing?
Agile testing is software testing that involves testing the software from the customer's point of view. Its importance is that, unlike the normal testing process, it does not wait for the development team to finish coding before testing begins; coding and testing proceed simultaneously. It requires continuous customer interaction.
It works on iterative SDLC (Software Development Life Cycle) methodologies: the task is divided into different segments, which are compiled at the end of the task.
13) What is Test case?
A test case is a set of conditions used to test a specific element. It contains information on test steps, prerequisites, the test environment and expected outputs.
14) What is the strategy for Automation Test Plan?
  • Preparation of Automation Test Plan
  • Recording the scenario
  • Error handler incorporation
  • Script enhancement by inserting check points and looping constructs
  • Debugging the script and fixing the issues
  • Rerunning the script
  • Reporting the result
15) What is quality audit?
The systematic and independent examination for determining the quality of activities is known as a quality audit. It cross-checks whether the planned arrangements have been properly implemented.
16) How does a server or client environment affect software testing?
Client/server applications are complex because of the many dependencies among clients, servers, communications and hardware, so the testing needs are extensive. Integration and system testing must also be completed within a limited period of time.
17) What are the tools used by a tester while testing?
  • Selenium
  • Firebug
  • OpenSTA
  • WinSCP
  • YSlow for FireBug
  • Web Developer toolbar for Firefox
18) Explain stress testing, load testing and volume testing?
  • Load Testing: Testing an application under heavy but expected load is known as Load Testing.  Here, the load refers to the large volume of users, messages, requests, data, etc.
  • Stress Testing: When the load placed on the system is raised or accelerated beyond the normal range then it is known as Stress Testing.
  • Volume Testing:  The process of checking the system, whether the system can handle the required amounts of data, user requests, etc. is known as Volume Testing.
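A toy sketch of the load-testing idea: fire many concurrent requests at a function standing in for the system, and check that all of them succeed. Real load tests use dedicated tools (LoadRunner, JMeter, OpenSTA); this only illustrates the concept, and `handle_request` is a hypothetical stand-in:

```python
# Load-testing sketch: simulate a "heavy but expected" number of users.
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # stand-in for a request to the system under test
    return f"ok-{i}"

NUM_USERS = 50  # the expected load
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(handle_request, range(NUM_USERS)))

assert len(results) == NUM_USERS and all(r.startswith("ok") for r in results)
print(f"{NUM_USERS} simulated requests succeeded")
```

Stress testing would push `NUM_USERS` well beyond the expected range until the system degrades; volume testing would grow the size of the data each request carries.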
19) What are the five common solutions for software developments problems?
  • Set up the requirements criteria: the requirements of the software should be complete, clear and agreed upon by all
  • Set a realistic schedule, with time for planning, designing, testing, fixing bugs and re-testing
  • Do adequate testing: start testing immediately after one or more modules are developed
  • Use rapid prototyping during the design phase so customers can easily see what to expect
  • Use group communication tools
20) What is a use case and what does it include?
The document that describes the user actions and system responses for a particular functionality is known as a use case. It includes a cover page, revision history, table of contents, flow of events, special requirements, pre-conditions and post-conditions.
21) What is CRUD testing and how to test CRUD?

CRUD testing is a form of black-box testing. CRUD stands for Create, Read, Update and Delete: each of the four operations is exercised through the application and the result is verified, typically with SQL queries against the database.
22) What is validation and verification in software testing?
In verification, all the key aspects of software development are considered: code, specifications, requirements and document plans. Verification is done on the basis of four things: issue lists, checklists, walkthroughs and inspection meetings. Validation follows verification; it involves actual testing, in which all the aspects covered by verification are checked thoroughly.
23) What is thread testing?
Thread testing is a form of top-down testing in which the progressive integration of components follows the implementation of subsets of the requirements, as opposed to integrating components by successively lower levels.
24) What is configuration management?
It is a process to control and document any changes made during the life of a project.  Release control, Change control and Revision control are the important aspects of configuration management.
25) What is Ad Hoc testing?
It is a testing phase where the tester tries to break the system by randomly trying the system’s functionality.  It can include negative testing as well.
26) List out the roles of a software Quality Assurance engineer?
A software quality assurance engineer's tasks include the following:
  • Writing source code
  • Software design
  • Control of source code
  • Reviewing code
  • Change management
  • Configuration management
  • Integration of software
  • Program testing
  • Release management process
27) Explain what are test driver and test stub and why it is required?
  • The stub is called from the software component to be tested, it is used in top down approach
  • The driver calls a component to be tested, it is used in bottom up approach
  • It is required when we need to test the interface between modules X and Y and we have developed only module X. So we cannot just test module X but if there is any dummy module we can use that dummy module to test module X
  • Now module B cannot receive or send data from module A directly, so in these case we have to transmit data from one module to another module by some external features. This external feature is referred as Driver
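The stub/driver distinction above can be sketched concretely. All module names here are the hypothetical X and Y from the explanation:

```python
# Sketch of a stub and a driver around module X.
# X calls a downstream module Y that does not exist yet, so a stub stands
# in for Y (top-down); a driver calls X from above (bottom-up).

def stub_y(request):
    """Stub for the unbuilt module Y: returns a canned response."""
    return {"status": "routed", "request": request}

def module_x(request, downstream=stub_y):
    """Module under test: validates the request, then hands it downstream."""
    if not request:
        raise ValueError("empty request")
    return downstream(request)

def driver():
    """Driver: exercises module X, since no real caller exists yet."""
    return module_x("service-request-42")

result = driver()
print(result["status"])
```

The stub fakes what X calls; the driver fakes what calls X. Together they let X's interface be tested before its neighbours exist.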
28) Explain what is Bug triage?
Bug triage is a process to:
  • Ensure bug reports are complete
  • Analyze the bug
  • Assign the bug to the proper owner
  • Adjust the bug severity properly
  • Set an appropriate bug priority
29) List out various tools required to support testing during development of the application?
To support testing during development of application following tools can be used
  • Test Management Tools: JIRA, Quality Center etc.
  • Defect Management Tools: Test Director, Bugzilla
  • Project Management Tools: Sharepoint
  • Automation Tools: RFT, QTP, and WinRunner
30) Explain what is a cause effect graph?
A cause effect graph is a graphical representation of inputs and the associated outputs effects that can be used to design test cases.
31) Explain what a Test Metric is in software testing and what information it contains?
In software testing, a test metric is a standard of test measurement: statistics describing the structure or content of the testing effort. It contains information such as:
  • Total tests
  • Tests run
  • Tests passed
  • Tests failed
  • Tests deferred
  • Tests passed the first time
32) Explain what is traceability matrix?
A traceability matrix maps test cases (or test scripts) to their specified requirements, making it possible to verify that every requirement is covered by at least one test.
33) Explain what is the difference between Regression testing and Retesting?
Retesting is carried out to check that defect fixes work, while regression testing is performed to check whether a defect fix has any impact on other functionality.
34) List out the software quality practices through the software development cycle?
Software quality practices include:
  • Review the requirements before starting the development phase
  • Code Review
  • Write comprehensive test cases
  • Session based testing
  • Risk based testing
  • Prioritize bug based on usage
  • Form a dedicated security and performance testing team
  • Run a regression cycle
  • Perform sanity tests on production
  • Simulate customer accounts on production
  • Include software QA Test Reports
35) Explain what is the rule of a “Test Driven Development”?
The rule of Test Driven Development is to prepare test cases before writing the actual code, which means you actually write the code for the tests before you write the code for the application.
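The test-first rule can be shown in miniature. The `slugify` function here is a hypothetical example, not from the source:

```python
# TDD sketch: the test exists before the code. Step 1 - write a failing
# test for a function that does not exist yet; step 2 - write just enough
# code to make it pass.

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Minimal implementation written only after (and driven by) the test:
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()
print("tests pass")
```

The test doubles as an executable specification: `slugify`'s behaviour (lowercase, trimmed, hyphen-joined) was pinned down before a line of implementation existed.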
36) Mention what are the types of documents in QA?
The types of documents in QA are
  • Requirement Document
  • Test Metrics
  • Test cases and Test plan
  • Task distribution flow chart
  • Transaction Mix
  • User profiles
  • Test log
  • Test incident report
  • Test summary report
37) Explain what your QA documents should include?
QA testing documents should include:
  • List the number of defects detected as per severity level
  • Explain each requirement or business function in detail
  • Inspection reports
  • Configurations
  • Test plans and test cases
  • Bug reports
  • User manuals
  • Prepare separate reports for managers and users
38) Explain what is MR and what information does MR consists of?
MR stands for Modification Request, also referred to as a defect report; it is written to report errors, problems or suggestions in the software.
39) What should the software QA document include?
Software QA document should include
40) Mention how validation activities should be conducted?
Validation activities should be conducted using the following techniques:
  • Hire third party independent verification and validation
  • Assign internal staff members that are not involved in validation and verification activities
  • Independent evaluation

50 Most Popular Quality Center Interview Questions and Answers

Quality Center Interview Questions & Answers

Q#1. What is Quality Center?
Ans. Quality Center is a product of HP, known as HP QC, Quality Center or HP ALM (Application Lifecycle Management). It is a web-based test management tool that supports various phases of the software development life cycle. It helps improve application quality through more effective implementation of a project, and it is cost-efficient too.
Q#2. What are the benefits of using Quality Center?
Ans. Quality Center is simple and one of the best test management tools. Its benefits are:
  1. It can be accessed through an IE browser.
  2. Project database of a test can be maintained by QC.
  3. It can be integrated with HP testing tools like QTP and Load Runner. It is also compatible with third party tools.
  4. It helps in effectively executing test sets, collecting results and analyzing data.
  5. It helps in monitoring defects closely.
  6. QC can be linked to an email system which provides an easy way to share defect tracking information.
  7. It can be used for creating reports and graphs which helps in analyzing test data.
  8. It supports virtual environments like Citrix XenApp 6.0 and VMware ESX 5.0.
Q#3. What is the first & latest version of Quality Center?
Ans. Quality Center 8.0 was the first version, and Quality Center/ALM 12.0 is the latest version.
Q#4. Explain the modules of Quality Center?
Ans. The Quality Center modules are:
  1. Release Module: Allows us to create project releases; each release can have multiple cycles.
  2. Requirement Module: Allows us to manage requirements: what we are testing, the requirement topics and items, and the analysis of requirements.
  3. Test Plan: Allows us to write test cases for the requirements in a hierarchical tree structure.
  4. Test Resources: Allows us to manage test resources, which can be associated with tests.
  5. Test Lab: Allows us to run tests and analyze the results.
  6. Defect Module: Allows us to log all failed test case results.
  7. Dashboard: Allows us to create graphs and reports.
Q#5. How many built in tables does Quality Center have?
Ans. There are six built in tables:
  1. Test Table
  2. Test Step Table
  3. Test Set Table
  4. Run Table
  5. Defect Table
  6. Requirement Table
Q#6. How many types of reports and graphs are there in Quality Center?
Ans. Reports and graphs can be generated at any time and in every phase of the testing process, from the requirement, test plan, test lab or defect module, using default or customized settings. We can also get summary and progress reports.
Q#7. Which types of database are used in Quality Center?
Ans. When a Quality Center Project is created we have to store and manage the data generated and collected by Quality Center. Each and every project is supported by a database that is used to store project information. The following database applications are used to store and manage Quality Center information:
  • Oracle 9.2.0.6 Standard/Enterprise Edition
  • Oracle 10.2.0.3
  • Microsoft SQL Server 2005 (SP2).
Q#8. How do you control access to a QC project?
Ans. We need to specify the users and the privileges for each user.
Q#9. How many types of tabs are there in Quality Center?
Ans. Following types of tabs are available:
1. Requirement: Helps in tracking the customer requirements.
2. Test plan: Helps in designing the test cases and to store the test scripts.
3. Test lab: Helps in executing the test cases and track the results.
4. Defect: Helps in logging a defect and to track the logged defects.
Q#10. What are the different edition for HP QC or HP ALM?
Ans. The different edition of HP QC/ALM includes:
  • HP ALM Essentials: used by corporates that need the basic features to support their entire software life cycle.
  • HP QC Enterprise Edition: the most common edition, used by corporates that use ALM mainly for testing purposes; it also provides integration with UFT.
  • HP ALM Performance Center Edition: best suited for organizations that use HP ALM to drive HP LoadRunner scripts. It helps users manage, maintain, execute, schedule and monitor performance tests.
Q#11. What is the difference between Test Director and Quality Center?
Ans. Quality center is the advanced version of Test Director. It has more features than Test Director.
Q#12. Do we have programming interface in Quality Center?
Ans. No, we don’t have programming interface in Quality Center.
Q#13. What is the difference between Quality Center and Bugzilla?
Ans. Quality Center is a test management tool which supports various phases of software development life cycle whereas BugZilla is Defect Management tool only.

Q#14. What is meant by test lab in Quality Center?
Ans. Test Lab is the Quality Center functionality through which we execute tests. We create test trees and add tests to them under the Test Plan module of a project. These tests then need to be imported into the Test Lab module, where Quality Center executes them.
Q#15. How can we import test cases from Excel to Quality Center?
Ans. To import test cases from Excel to Quality Center:
  1. Install and configure the Microsoft Excel Add-In for Quality Center.
  2. Map the columns present in Excel to the columns in Quality Center.
  3. Export the data from Excel using the "Export to Quality Center" option in Excel.
  4. Check for errors, if any.
Q#16. How can we export the file from Quality Center to Excel/Word?
Ans. A file can be exported from any of the following tabs in Excel or Word format.
  1. Requirement tab:
    1. Right click on the main requirement.
    2. Click on Export.
    3. Save as Word, Excel or another template.
  2. Test Plan tab:
    1. Select a test script.
    2. Click on the Design Steps tab.
    3. Right click anywhere in the window.
    4. Click on Export and Save As. Note: only an individual test can be exported; no parent-child export is possible.
  3. Test Lab tab:
    1. Select a child group.
    2. Click on the execution grid.
    3. Right click and save in Excel or another format.
  4. Defects tab:
    1. Right click anywhere in the window.
    2. Export all or selected defects.
    3. Save them in an Excel sheet or any other format.
Q#17. What is Business Component?
Ans. A business component is used for Business Process Testing (BPT). Business components provide a script-free environment for creating tests.
Q#18. How to use QTP as an automation tool in Quality Center?
Ans. Using QTP add-in in Quality Center we can use QTP as an automation tool.
Q#19. How to switch between two projects in Quality Center?
Ans. Switching between two projects differs between Quality Center 9.0 and above and other versions.
QC 9.0:- Select Tools then Change Projects and Select Project.
Other versions: Log-off and log-in again.
Q#20. What is Coverage status?
Ans. Percentage of testing covered at a given time is known as Coverage status. It helps in tracking project status.
Q#21. Explain the architecture of HP-ALM?
Ans. HP ALM has following components:
  1. HP ALM client.
  2. ALM server/Application server.
  3. Database servers.
Q#22. What are the components of Dashboard Analysis?
Ans. The dashboard analysis has two components.
  • Analysis View, which contains the analysis tree.
  • Dashboard View, which contains the dashboard tree.
Q#23. What types of requirements can be added to test cases in Quality Center?
Ans. There are two types of requirements that can be added to test cases in Quality Center:
Parent requirements, which cover high-level functions of the requirements.
Child requirements, which cover low-level functions of the requirements.

Q#24. What is Sprinter in HP-ALM?
Ans. Sprinter provides an automated environment for executing various manual testing tasks. It offers advanced tools that help in the easy execution of testing tasks.
Q#25. How to use Quality Center in real time project?
Ans. Following are the steps to use Quality Center in real time project.
  1. Complete the preparation of test cases.
  2. Export the test cases into Quality Center and load them into the Test Plan module.
  3. Move the test cases from the Test Plan tab to the Test Lab module.
  4. Execute the test cases and check the results.
  5. If any defects are found, raise them in the Defect module.
Q#26. How to map the requirements with test cases in Quality Center?
Ans. We can map requirements to test cases in QC as follows:
1. In the Requirements tab, select the Coverage view.
2. Select a requirement by clicking on the parent, child or grandchild.
3. Another window will appear on the right-hand side. It has two tabs:
(a) Test coverage
(b) Details
The Test coverage tab is selected by default; otherwise click on it.
4. Click on the Select Tests button; a new window will appear on the right-hand side with a list of all tests. You can select any test case you want to map to your requirement.
Q#27. What is a Table in QC project?
Ans. A table is a part of database which stores records of information about the test plan.
Q#28. What does a live analysis graph display in Quality Center?
Ans. A Quality Center live analysis graph provides a visual overview of all tests within a folder in the test plan tree.
Q#29. What are the phases of test management with Quality Center in order?
Ans. There are 5 phases: Specify releases, Specify requirements, Plan tests, Execute tests, Track defects.
Q#30. What are the interfaces of Quality center?
Ans. The interfaces of QC are:
  • Site Admin
  • Quality Center
Q#31. How do records appear in Quality Center?
Ans. By default, in the order in which they were added.
Q#32. How can we save the tests executed in Test Lab?
Ans. They are saved automatically when the user clicks on "END RUN" in the Test Lab.
Q#33. How do you run reports from Quality Center?
Ans. To run reports from QC:
  1. Open the Quality Center project
  2. It displays the requirements modules
  3. Choose report:  Analysis > reports > standard requirements report
Q#34. What is use of Test Instance?
Ans. A test instance is required to run a test case in the Test Lab. We can't run a test case directly in the lab; instead we run a test instance of that test case.
Q#35. What is Risk Category?
Ans. We determine a risk category for each assessment requirement under the analysis requirement. It has two factors:
  1. Business Criticality
  2. Failure Probability.
Q#36. What is an assessment requirement?
Ans. Assessment requirements are requirements that are children of an analysis requirement, at a lower level in the tree hierarchy.
Q#37. What are roles and responsibilities of QC admin?
Ans. Roles & responsibilities of QC Admin are:
  1. Project Creation
  2. Managing users and their authentication.
  3. Performance monitoring.
  4. Data backup etc.
Q#38. How do you find duplicates bugs in the Quality Center?
Ans. We can find duplicate bugs in the Defects module using the "Find Similar Defects" button. We enter a brief defect description and it shows similar defects.
Q#39. Does Quality Center supports UNIX Operating environment?
Ans. Yes. Quality Center comes with two kinds of licenses:
  1. Quality Center for Windows.
  2. Quality Center for UNIX.
Q#40. Why to use Filters? How you define it?
Ans. We use filters to see only the records that meet specific criteria that we define. Multiple items can be defined in a filter.
Q#41. What does the Users group determine?
Ans. The users group determines the privileges that the user has within a project.
Q#42. What is Unattached Folder in Test Plan?
Ans. When we delete a folder or test from the test plan tree, there are two options: delete only the folder, or delete the folder along with its subfolders and tests. When we delete only the folder, all the tests under it are moved to the Unattached folder in the test plan tree.
Q#43. What is Matching Defects?
Ans. Matching Defects helps us find and eliminate duplicate or similar defects in a project. There are two methods to search for similar defects:
  • Finding Similar Defects, which compares a selected defect with all other existing defects in the project.
  • Finding Similar Text, which compares a specified text string against all other existing defects in the project.
Q#44. What is Defect Tracking?
Ans. Defect tracking is the method of finding and removing application defects. We can detect and add defects to a project at any stage of the application management process.
Q#45. Are ‘Not Covered’ and ‘Not Run’ statuses the same?
Ans. No, there is a difference between the ‘Not Covered’ and ‘Not Run’ statuses.
‘Not Covered’ refers to requirements for which no test cases have been written, while ‘Not Run’ refers to requirements whose test cases have been written but not yet run.
Q#46. Explain Version Control?
Ans. We use version control to keep track of changes made to entities in the project. We can create QC entities and also keep previous versions of those entities to track the changes.
Q#47. What is test set notification, and when we need it?
Ans. We use a test set notification to inform a specific user in case of a failure in the test set.
Q#48. What is the need of Host Manager?
Ans. The Host Manager helps run tests on hosts connected to our network. It shows the list of available hosts for test execution and also organizes them into groups for a specific project.
Q#49. Explain Linking Defect to test in Quality Control?
Ans. Defects can be linked to tests in the defect grid; this helps to run tests based on the status of a defect. Defects can be linked to other entities as well, such as requirements. Linking can be direct or indirect: if the defect is linked to an entity, QC adds a direct link; if it is linked to a run step, QC adds an indirect link to its run, test instance, test set, and test.
Q#50. What is the default database in Quality Center?
Ans. SQL Server is the default database in Quality Center.
That’s all about HP Quality Center interview questions and answers. Go through these questions a few times and I am sure you will find it easy both to learn the tool and to clear the interview.
All the best!

Sunday, April 17, 2016

Database(Data) Testing Tutorial with Sample TestCases

The GUI is in most cases given the most emphasis by test managers as well as development team members, since the graphical user interface is the most visible part of the application. However, it is equally important to validate the information that can be considered the heart of the application: the DATABASE.
Let us consider a banking application where a user makes transactions. From a database testing viewpoint, the following things are important:

  1. The application stores the transaction information in the application database and displays them correctly to the user.
  2. No information is lost in the process.
  3. No partially performed or aborted operation information is saved by the application.
  4. No unauthorized individual is allowed to access the user’s information.
To ensure all the above objectives, we need to use data validation, or database testing.
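As a minimal sketch of how these objectives can be turned into automated checks, the following Python/SQLite snippet is purely illustrative: the accounts table, the starting balances, and the transfer routine are my own assumptions, not part of any real banking system.

```python
import sqlite3

def make_db():
    # Hypothetical schema: two accounts with known starting balances
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
    conn.commit()
    return conn

def transfer(conn, src, dst, amount):
    # 'with conn' commits on success and rolls back on any exception,
    # so an aborted transfer leaves no partial data (objective 3)
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
        if conn.execute("SELECT balance FROM accounts WHERE id = ?", (src,)).fetchone()[0] < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))

conn = make_db()
transfer(conn, 1, 2, 30)            # objective 1: the transaction is stored correctly
try:
    transfer(conn, 1, 2, 999)       # objective 3: this transfer must fail cleanly
except ValueError:
    pass
balances = dict(conn.execute("SELECT id, balance FROM accounts"))
assert balances == {1: 70, 2: 80}   # objective 2: no information (or money) was lost
```

The same pattern, run against the real application database, gives a repeatable regression test for transaction integrity.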
In this tutorial, we will study:

User-Interface Testing
  • Also known as Graphical User Interface (GUI) testing or front-end testing.
  • Chiefly deals with all the testable items that are open to the user for viewing and interaction, such as forms, presentation, graphs, menus, and reports (created through front-end tools like VB, VB.NET, VC++, or Delphi).
  • Includes validating text boxes, select dropdowns, calendars and buttons, navigation from one page to another, display of images, and the look and feel of the overall application.
  • The tester must be thoroughly knowledgeable about the business requirements as well as the usage of the development tools and of automation frameworks and tools.

Database (Data) Testing
  • Also known as back-end testing or data testing.
  • Chiefly deals with the testable items that are generally hidden from the user’s view: internal processes and storage such as assemblies and DBMSs like Oracle, SQL Server, MySQL, etc.
  • Involves validating the schema, database tables, columns, keys and indexes, stored procedures, triggers, database server configuration, and data duplication.
  • The tester must have a strong background in database servers and Structured Query Language concepts in order to perform back-end testing.

Types of database testing

The 3 types of Database Testing are
  1. Structural Testing
  2. Functional Testing
  3. Non-functional Testing
Let’s look into each type and its sub-types one by one.

Structural database testing

Structural data testing involves validating all the elements inside the data repository that are used primarily for storing data and that are not meant to be manipulated directly by end users. Validating the database servers is also a very important consideration in this type of testing. Successful completion of this phase requires mastery of SQL queries.

Schema testing

The chief aspect of schema testing is to ensure that the schema mapping between the front end and the back end is consistent. Thus, we may also refer to schema testing as mapping testing.
Let us discuss most important checkpoints for schema testing.
  1. Validation of the various schema formats associated with the databases. Many times the mapping format of a table may not be compatible with the mapping format present at the user-interface level of the application.
  2. Verification is needed in the case of unmapped tables/views/columns.
  3. Verify whether heterogeneous databases in an environment are consistent with the overall application mapping.
Let us also look at some of the interesting tools for validating database schemas.
  • DBUnit, integrated with Ant, is very suitable for mapping testing.
  • SQL Server allows testers to check and query the schema of the database by writing simple queries rather than code.
For example, if the developers want to change a table’s structure or delete it, the tester would want to ensure that all the stored procedures and views that use that table are compatible with the change. Another example: if testers want to check for schema changes between two databases, they can do so using simple queries.
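As a small illustration of this "simple queries" approach, the sketch below uses SQLite's schema metadata (`PRAGMA table_info`) to spot columns that differ between two versions of a database; the `users` table and its columns are hypothetical.

```python
import sqlite3

def table_schema(conn, table):
    # Map each column name to its declared type via SQLite's schema metadata
    return {row[1]: row[2] for row in conn.execute(f"PRAGMA table_info({table})")}

old = sqlite3.connect(":memory:")
new = sqlite3.connect(":memory:")
old.execute("CREATE TABLE users (id INTEGER, name TEXT)")
new.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")  # schema changed

# Columns present in the new database but missing from the old one
added = set(table_schema(new, "users")) - set(table_schema(old, "users"))
print(added)  # {'email'}
```

On SQL Server the same idea would query `INFORMATION_SCHEMA.COLUMNS` instead of a PRAGMA.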

Database table, column testing

Let us look into various checks for database and column testing.
  1. Whether the mapping of the database fields and columns in the back end is compatible with those mappings in the front end.
  2. Validation of the length and naming convention of the database fields and columns as specified by the requirements.
  3. Validation of the presence of any unused/unmapped database tables/columns.
  4. Validation of the compatibility of the
  • data type
  • field lengths
          of the backend database columns with that of those present in the front end of the application.
  5. Whether the database fields allow the user to provide the desired inputs as required by the business requirement specification documents.
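Checks like these can be automated by comparing the live schema against an expected definition derived from the requirements. A hypothetical sketch in Python/SQLite (the `users` table and the `EXPECTED` mapping are invented for illustration):

```python
import sqlite3

# Expected column names and declared types, as the requirements might specify them
EXPECTED = {"id": "INTEGER", "username": "VARCHAR(20)", "created": "TEXT"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username VARCHAR(20), created TEXT)")

actual = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(users)")}
unmapped = set(actual) - set(EXPECTED)                             # check 3: unused/unmapped columns
mismatched = {c for c in EXPECTED if actual.get(c) != EXPECTED[c]} # check 4: data type / field length
assert not unmapped and not mismatched
```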
Keys and indexes testing
Important checks for keys and indexes -
  1. Check whether the required
  • Primary key
  • Foreign Key
         constraints have been created on the required tables.
  2. Check whether the references for foreign keys are valid.
  3. Check whether the data types of the primary key and the corresponding foreign keys are the same in the two tables.
  4. Check whether the required naming conventions have been followed for all keys and indexes.
  5. Check the size and length of the required fields and indexes.
  6. Check whether the required
  • Clustered indexes
  • Non Clustered indexes
          have been created on the required tables as specified by the business requirements.
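Several of these checks can be scripted. In SQLite, for instance, `PRAGMA foreign_key_check` finds invalid foreign key references and `PRAGMA index_list` exposes index names for a naming-convention check; the tables and the `idx_` convention below are assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id));
    CREATE INDEX idx_orders_customer ON orders(customer_id);
    INSERT INTO customers VALUES (1, 'Alice');
    INSERT INTO orders VALUES (10, 1);
    INSERT INTO orders VALUES (11, 99);   -- dangling foreign key reference
""")

# Check: every foreign key must point at an existing parent row
violations = conn.execute("PRAGMA foreign_key_check(orders)").fetchall()
assert len(violations) == 1   # order 11 references a non-existent customer

# Check: all index names follow the (assumed) idx_ naming convention
indexes = [row[1] for row in conn.execute("PRAGMA index_list(orders)")]
assert all(name.startswith("idx_") for name in indexes)
```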

Stored procedures testing

The most important things to validate for stored procedures:
  1. Whether the development team adopted the required
  • coding standard conventions
  • exception and error handling
          for all the stored procedures in all modules of the application under test.
  2. Whether the development team covered all the conditions/loops by applying the required input data to the application under test.
  3. Whether TRIM operations are properly applied whenever data is fetched from the required tables in the database.
  4. Whether manual execution of a stored procedure provides the end user with the required result.
  5. Whether manual execution of a stored procedure ensures the table fields are updated as required by the application under test.
  6. Whether execution of the stored procedures implicitly invokes the required triggers.
  7. Validation of the presence of any unused stored procedures.
  8. Validation of the Allow Null condition, which can be done at the database level.
  9. Validation that all the stored procedures and functions execute successfully when the database under test is blank.
  10. Validation of the overall integration of the stored procedure modules as per the requirements of the application under test.
Some interesting tools for testing stored procedures are LINQ, the SP Test tool, etc.

Trigger testing

  1. Whether the required coding conventions have been followed during the coding phase of the Triggers.
  2. Check whether the triggers executed for the respective DML transactions have fulfilled the required conditions.
  3. Whether the trigger updates the data correctly once it has been executed.
  4. Validation of the required Update/Insert/Delete triggers functionality in the realm of the application under test.

Database server validations

  1. Check the database server configurations as specified by the business requirements.
  2. Check the authorization of the required user to perform only those levels of actions which are required by the application.
  3. Check that the database server is able to cater to the needs of maximum allowed number of user transactions as specified by the business requirement specifications.

Functional database testing

Functional database testing needs to ensure that the transactions and operations performed by end users are consistent with the requirement specifications.
Following are the basic conditions which need to be observed for database validations.
  • Whether the field is mandatory while allowing NULL values on that field.
  • Whether the length of each field is of sufficient size?
  • Whether all similar fields have same names across tables?
  • Whether there are any computed fields present in the Database?
This particular process validates the field mappings from the end-user viewpoint: the tester performs an operation at the database level and then navigates to the relevant user interface item to observe and validate whether the proper field validations have been carried out.
The reverse, where the tester first carries out an operation at the user interface and then validates it from the back end, is also a valid option.

Checking data integrity and consistency

The following checks are important:
  1. Whether the data is logically well organized.
  2. Whether the data stored in the tables is correct and as per the business requirements.
  3. Whether any unnecessary data is present in the application under test.
  4. Whether data updated from the user interface has been stored as per the requirement.
  5. Whether TRIM operations are performed on the data before it is inserted into the database under test.
  6. Whether the transactions have been performed according to the business requirement specifications and whether the results are correct.
  7. Whether the data has been properly committed when a transaction executes successfully, as per the business requirements.
  8. Whether the data has been rolled back successfully when a transaction does not execute successfully.
  9. Whether the data has been rolled back at all when a transaction fails and multiple heterogeneous databases are involved in the transaction in question.
  10. Whether all transactions have been executed using the required design procedures as specified by the system business requirements.
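Checks 7 and 8 in particular lend themselves to an automated test: commit a successful transaction, force a failure midway through another, and assert the resulting database state. A sketch with an invented orders table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO orders VALUES (1, 5)")
conn.commit()   # check 7: successful work is committed

# Check 8: a failure after the first statement must roll everything back
try:
    conn.execute("UPDATE orders SET qty = 0 WHERE id = 1")
    raise RuntimeError("simulated downstream failure before commit")
except RuntimeError:
    conn.rollback()

qty = conn.execute("SELECT qty FROM orders WHERE id = 1").fetchone()[0]
assert qty == 5   # the uncommitted update left no trace
```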

Login and user security

Validation of login and user security credentials needs to take the following into consideration:
  1. Whether the application prevents the user from proceeding further in the case of:
  • an invalid username with a valid password
  • a valid username with an invalid password
  • an invalid username with an invalid password
          and allows the user to proceed only with a valid username and a valid password.
  2. Whether the user is allowed to perform only those specific operations which are specified by the business requirements.
  3. Whether the data is secured from unauthorized access.
  4. Whether different user roles have been created with different permissions.
  5. Whether all users have the required levels of access to the specified database, as required by the business specifications.
  6. Check that sensitive data like passwords and credit card numbers are encrypted and not stored as plain text in the database. It is also good practice to ensure that all accounts have complex passwords that are not easily guessed.
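The plain-text check can be verified directly against the database: fetch the stored credential and assert the plaintext never appears. The snippet below is a sketch only; the `users` table is hypothetical and PBKDF2 stands in for whatever hashing scheme the application actually uses.

```python
import hashlib
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash BLOB, salt BLOB)")

def create_user(conn, name, password):
    # Store a salted PBKDF2 digest instead of the plaintext password
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    conn.execute("INSERT INTO users VALUES (?, ?, ?)", (name, digest, salt))

create_user(conn, "alice", "s3cret-Pa55")
name, pw_hash, salt = conn.execute("SELECT * FROM users").fetchone()

# The plaintext password must not appear anywhere in the stored value
assert pw_hash != b"s3cret-Pa55" and b"s3cret-Pa55" not in pw_hash
```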
Non-functional testing
Nonfunctional testing in the context of database testing can be categorized as required by the business requirements: load testing, stress testing, security testing, usability testing, compatibility testing, and so on. Load testing and stress testing, which can be grouped under performance testing, serve two specific purposes in nonfunctional testing.
Risk quantification - Quantifying risk helps the stakeholders ascertain the system's response times under the required levels of load. This is the original intent of any quality assurance task. Note that load testing does not mitigate risk directly; rather, through risk identification and risk quantification, it presents corrective opportunities and an impetus for remediation that will mitigate risk.
Minimum system equipment requirement - Through formal testing we learn the minimum system configuration that allows the system to meet the formally stated performance expectations of stakeholders, so that extraneous hardware, software, and the associated cost of ownership can be minimized. This can be categorized as an overall business optimization requirement.

Load testing

The purpose of any load test should be clearly understood and documented.
The following types of configurations are a must for load testing.
  1. The most frequently used user transactions have the potential to impact the performance of all of the other transactions if they are not efficient.
  2. At least one non-editing user transaction should be included in the final test suite, so that performance of such transactions can be differentiated from other more complex transactions.
  3. The more important transactions that facilitate the core objectives of the system should be included, as failure under load of these transactions has, by definition, the greatest impact.
  4. At least one editable transaction should be included so that performance of such transactions can be differentiated from other transactions.
  5. Observation of the optimum response time under a huge number of virtual users for all the prospective requirements.
  6. Observation of the effective times for fetching various records.
Popular load testing tools include LoadRunner and JMeter.

Stress testing

Stress testing is also sometimes referred to as torturous testing, as it stresses the application under test with enormous workloads until the system fails. This helps identify the breakdown points of the system.
Popular stress testing tools include LoadRunner and JMeter.
Most common issues occurring during database testing
  1. Significant overhead can be involved in determining the state of the database transactions.
          Solution: Plan and time the overall process so that no time- or cost-based issues appear.
  2. New test data has to be designed after the old test data is cleaned up.
          Solution: Have a prior plan and methodology for test data generation at hand.
  3. An SQL generator is required to transform SQL validators so that the SQL queries are apt for handling the required database test cases.
          Solution: Maintaining and continuously updating the SQL queries is a significant part of the overall testing process and should be part of the overall test strategy.
  4. The prerequisites mentioned above mean that setting up the database testing procedure can be costly as well as time consuming.
          Solution: Strike a fine balance between quality and overall project schedule duration.
Myths or misconceptions related to database testing
  1. Database testing requires plenty of expertise and is a very tedious job.
  • Reality: Effective and efficient database testing provides long-term functional stability to the overall application, so it is worth putting in the hard work.
  2. Database testing adds an extra work bottleneck.
  • Reality: On the contrary, database testing adds value to the overall work by finding hidden issues and thus proactively helping to improve the application.
  3. Database testing slows down the overall development process.
  • Reality: A significant amount of database testing helps in the overall improvement of quality for the database application.
  4. Database testing can be excessively costly.
  • Reality: Expenditure on database testing is a long-term investment which leads to long-term stability and robustness of the application.

Best Practices

  • All data, including metadata as well as functional data, needs to be validated according to its mapping in the requirement specification documents.
  • Test data created by, or in consultation with, the development team needs to be verified.
  • Validate the output data using both manual and automated procedures.
  • Deploy techniques such as cause-effect graphing, equivalence partitioning, and boundary-value analysis to generate the required test data conditions.
  • The referential integrity validation rules for the required database tables also need to be validated.
  • The selection of default table values for validating database consistency is a very important concept.
  • Check whether log events have been successfully added to the database for all required login events.
  • Do scheduled jobs execute in a timely manner?
  • Take timely backups of the database.
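The boundary-value analysis technique mentioned above can be sketched in a few lines: for a field whose requirement specifies an allowed range, generate test data at and just beyond each boundary (the 1..100 range here is a made-up example).

```python
def boundary_values(lo, hi):
    # Values at, just inside, and just outside each boundary of [lo, hi]
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# e.g. a quantity field whose (hypothetical) requirement allows 1..100
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```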