Tuesday, June 7, 2016

Approach to writing Manual Test cases/Scripts


I am in the process of writing manual test scripts for a system in the system testing phase. A comprehensive set of use cases has been prepared by the business analysts on the team, and from these, test scenarios and test conditions have been derived.
The system is a multi-component request processing system in which a user at one end can request a service from a device at the other end. The system processes these requests (authenticates the user, authorises the request, routes it to the correct device, etc.). There are also different data variables involved: different service users, different device alerts, different device statuses, and so on. These will also need to be considered when testing.
So what is the best way of writing the scripts? Each use case has multiple steps in its flow. Do we write a test for each step in the flow, or one test for the whole use case? I wrote tests for each step in the flow, but in the end it looked very repetitive. I could have written one test for the whole use case and just changed the test data (but is this really one test?). An example use case would be 'Process request'. A sample step in the flow is 'Acknowledge request from user' and another would be 'Log request'. Should a separate test be written for each of these parts of the flow, or one test for 'Process request'?
I don't know if I'm looking in the wrong places, but I usually find lots of articles and docs on test planning, test preparation, etc., and very little on writing effective manual test cases/scripts.
Thanks

3 Answers

In my experience with documenting system tests, I've found a multi-layered approach works. I really like Microsoft Test Manager for this because of two things: the ability to define input parameters for manual tests and the concept of shared test steps which can be used by any test case.
You don't mention whether you're using a test case tool, Word, Excel or some other method of documentation, but you can adapt what I'm describing to any tool.
  • Work top-down: I start at the use case or acceptance criteria if there are no test scenarios or test conditions defined, and treat each scenario as a high level test in a larger test suite.
  • Identify and extract repetitions: In any moderately complex system there will be a lot of repeated actions, either with identical data or with near-identical data. I aim to extract anything that is going to happen more than once into its own unit (either a test that gets linked as a prerequisite to another test or as shared test steps) and nest as deep as I have to. For instance, with login credentials, I'll define a test or shared steps for "successful admin login" which contains admin login credentials, a link to "successful login" shared steps, and the expected view or data returned (this may be in the generic successful login). The "successful login" contains even more generic "login" shared steps with no indication of what happens on completion - that's handled by the success/fail test steps. It's effectively refactoring for DRY in your manual test cases.
  • Define data separately: I prefer to have my test data as a separate source from the test cases, and reference it. The precise form of the data will vary, but I've used databases, zipped files containing data sets, text files, XML... Whatever works. The key thing here is that it's something I can reference from any test case and reuse infinitely (a sketch of this, together with the shared steps idea above, follows after this list).
  • "Just in time" detailing: Generally speaking, I don't go into details in my test cases until I need to. When I'm initially creating them, I'll keep it basic - a test case might be "logged in admin requests A from device Z, all OK" (this usually corresponds closely to any identified scenarios). Then I'll break this down to a series of smaller tests: "1. log in as admin; 2. send request A. 3. check response from device Z". From that, I'll reference the more detailed repeated items - so for instance I'll have a reference to the exact structure of the request being sent. I rarely if ever go as far as "click this button" type test steps.
  • Leave room for exploration: Rather than script manual tests in extreme detail, I prefer to work at a level that allows the tester to choose different ways of entering data or performing actions. I'll usually include mention of critical actions to use (such as "check each defined keyboard shortcut triggers the defined action in document X page Y") - this works better when testers are familiar with the application and/or are experienced testers.
  • Steel thread first: This should go without saying, but still... My priority is always to start with the functionality defined in the use cases, and make sure that works under the conditions which are expected and/or defined. I don't even consider anything outside that boundary until I know that much is working. Often I don't detail other tests until after I've got the steel thread sorted (because there is rarely time to test as completely as I'd like, and if I had a dollar for every time I've been unable to test anything outside the steel thread I'd have a whole lot more money than I do).
  • Remember the goal: The reason for your tests is to provide the business people running the project with information about the state of the project. Testers aren't the gatekeepers - we don't have the perspective. Your tests should be designed to give as much information as possible about the project's state in the shortest possible time. In order to do this, you'll need to identify the most critical scenarios (which can be interesting when you're in an environment where everything is critical - been there...) and prioritize your tests accordingly.
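To make the "extract repetitions", "define data separately" and "just in time detailing" ideas above concrete, here is a minimal Python sketch (purely illustrative, not tied to Microsoft Test Manager or any other tool; the data structure and function names are made up for the example) of how shared steps and a separate data source might be composed into a terse, high-level manual test script:

    # In practice this would live in a separate source (a database, an XML or
    # JSON file, a zipped data set); it is inlined here only to keep the
    # sketch self-contained.
    test_data = {"admin": {"user": "admin", "password": "secret"}}

    # Shared steps, defined once and reused by any test case (DRY).
    def login_steps(credentials):
        return [
            "Open the application login page",
            "Enter username '{}' and the matching password".format(credentials["user"]),
            "Submit the login form",
        ]

    def successful_admin_login(data):
        # Builds on the generic login steps; the expected outcome lives here,
        # not in the shared steps themselves.
        return login_steps(data["admin"]) + ["Verify the admin view is displayed"]

    # A high-level scenario stays terse ("just in time" detailing) and only
    # pulls in the more detailed shared steps where it needs them.
    process_request_test = {
        "title": "Logged-in admin requests A from device Z, all OK",
        "steps": successful_admin_login(test_data) + [
            "Send request A (the exact request structure is in the data source)",
            "Check the response from device Z matches the expected status",
        ],
    }

    for number, step in enumerate(process_request_test["steps"], start=1):
        print(number, step)

The same structure carries over to Word or Excel: the shared steps become a referenced section or sheet, and the data source becomes a table the tester looks values up in.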
Great answer Kate! – Justin Jan 8 '14 at 20:03
Full disclosure: I work for Rainforest QA. Because of the nature of our platform we have a strict but relatively simple technique for test writing to ensure that our customers get reliable and consistent results.
That said, our approach to test writing works well for manual testing even if you don't use the Rainforest platform. The key points of our test writing philosophy are:
1) Keep the scope of each test narrow. We encourage our users to limit each test case to a single, discrete process. For example, the scope of one test might be to create a new account, and another would be to log in to an existing account. By keeping the scope of individual tests narrow, it's easier to tell whether that particular interaction is working correctly, without getting bogged down in too much extraneous information.
2) Design tests for deterministic results. Each test is broken down into pairs of actions and questions. For example:
Action: Click the "Create an Account" button on the top left of the page.
Question: Did the "create an account" popup box appear?
Every question in the action-question pair must be answered with a yes or a no. This binary format is designed to reduce any fuzziness about the results of the test. It takes the burden of interpreting unclear results off the tester and helps to eliminate miscommunication about whether the test passed or not (a rough sketch of this pairing follows at the end of this answer).
3) Make your tests easy for anyone to understand. When you work closely with a product, it can be easy to forget that your end users don't have the inside knowledge that you have. This is especially important if you have a remote team of testers or use a crowdsourced platform like Rainforest to run your tests. But even if you don't, it's a good practice to keep, because it forces you to put yourself in the shoes of your users.
You can check out more of our test-writing approach in this blog post: https://www.rainforestqa.com/blog/2016-04-11-how-to-write-better-qa-tests
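As a rough illustration of the action/question pairing described above (this is only a sketch of the idea in Python, not Rainforest's actual format or API; all names below are invented), each test can be thought of as a short list of action/question pairs where every question has a strict yes/no answer:

    # Each test covers one narrow, discrete process, expressed as
    # (action, question) pairs; every question must be answerable yes or no.
    create_account_test = [
        ("Click the 'Create an Account' button on the top left of the page",
         "Did the 'Create an Account' popup box appear?"),
        ("Fill in a valid email address and password, then click 'Sign up'",
         "Did the page confirm that the account was created?"),
    ]

    def test_passed(pairs, answers):
        """answers holds the tester's yes/no response to each question, in order."""
        # The test passes only if every single question was answered 'yes';
        # there is no room for 'partially' or 'it depends'.
        return len(answers) == len(pairs) and all(a == "yes" for a in answers)

    print(test_passed(create_account_test, ["yes", "yes"]))  # True  -> pass
    print(test_passed(create_account_test, ["yes", "no"]))   # False -> fail

Because the result of each pair is binary, disagreements about whether a test "sort of" passed simply can't arise.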