I am in the process of writing manual test scripts for a system in the System Testing phase. A comprehensive set of use cases has been prepared by the Business Analysts on the team, and from these, test scenarios and test conditions have been derived.
The system is a multi-component request processing system in which a user at one end can request a service from a device at the other end. The system processes these requests (authenticates the user, authorises the request, routes it to the correct device, etc.). There are also different data variables involved: different service users, different device alerts, different device statuses, and so on. These will also need to be considered when testing.
So what is the best way of writing the scripts? Each use case has multiple steps in its flow. Do we write a test for each step in the flow, or one for the whole use case? I wrote tests for each step in the flow, but the end result looked very repetitive. I could have written one test for the whole use case and just changed the test data (but is this really one test?). An example use case would be 'Process request'. A sample step in the flow is 'Acknowledge request from user', and another would be 'Log request'. Should a separate test be written for each of these parts of the flow, or one test for 'Process request'?
I don't know if I'm looking in the wrong places, but I usually find lots of articles and docs on test planning, test preparation, etc., yet little on writing effective manual test cases/scripts.
Thanks
In my experience with documenting system tests, I've found a multi-layered approach works. I really like Microsoft Test Manager for this because of two things: the ability to define input parameters for manual tests, and the concept of shared test steps that can be reused by any test case.
You don't mention whether you're using a test case tool, Word, Excel or some other method of documentation, but you can adapt what I'm describing to any tool.
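To make the two ideas concrete, here is a minimal, hypothetical sketch (plain Python, not Microsoft Test Manager itself) of what "shared test steps" plus "input parameters" look like as a structure. All names (`Step`, `SharedSteps`, `submit_request`, the user `alice`, the device `DEV-01`) are illustrative assumptions, not anything from your system:

```python
# Illustrative sketch of shared, parameterised manual test steps,
# in the spirit of Microsoft Test Manager's shared steps + parameters.
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str    # what the tester does
    expected: str  # what the tester should observe

@dataclass
class SharedSteps:
    name: str
    steps: list  # Step objects whose text contains {placeholders}

    def expand(self, **params):
        """Substitute parameter values into the step text."""
        return [Step(s.action.format(**params), s.expected.format(**params))
                for s in self.steps]

# One shared block, reused by every test case that begins with a request.
submit_request = SharedSteps(
    name="Submit service request",
    steps=[
        Step("Log in as {user}", "Home screen for {user} is shown"),
        Step("Request service from device {device}", "Request is acknowledged"),
    ],
)

@dataclass
class TestCase:
    title: str
    steps: list = field(default_factory=list)

# Each test case reuses the shared block with different test data,
# then adds its own case-specific steps.
tc = TestCase("Process request - valid user, online device")
tc.steps += submit_request.expand(user="alice", device="DEV-01")
tc.steps.append(Step("Check the request log", "Request from alice is logged"))

for i, s in enumerate(tc.steps, 1):
    print(f"{i}. {s.action} -> expect: {s.expected}")
```

This is also one answer to the original question: write one test case per use-case scenario, but factor the repetitive steps into shared blocks and vary only the parameters, so the repetition lives in one place.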
Full disclosure: I work for Rainforest QA. Because of the nature of our platform, we have a strict but relatively simple technique for test writing that ensures our customers get reliable and consistent results.
That said, our approach to test writing works well for manual testing even if you don't use the Rainforest platform. The key points of our test writing philosophy are:
1) Keep the scope of each test narrow. We encourage our users to limit each test case to a single, discrete process. For example, the scope of one test might be to create a new account, and another to log in to an existing account. By keeping the scope of individual tests narrow, it's easier to tell whether that particular interaction is working correctly, without getting bogged down in extraneous information.
2) Design tests for deterministic results. Each test is broken down into pairs of actions and questions. For example, an action might be 'Click the Log out link', paired with the question 'Are you returned to the login page?'
Every question in the action-question pair must be answered with a yes or a no. This binary format is designed to remove any fuzziness about the results of the test. It takes the burden of interpreting unclear results off the tester and helps to eliminate miscommunication about whether the test passed.
3) Make your tests easy for anyone to understand. When you work closely with a product, it can be easy to forget that your end users don't have the inside knowledge that you have. This is especially important if you have a remote team of testers or use a crowdsourced platform like Rainforest to run your tests. But even if you don't, it's a good practice to keep, because it forces you to put yourself in the shoes of your users.
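The action/question format above can be sketched as a tiny data structure. This is a hypothetical illustration, not Rainforest's actual implementation; the example actions, questions, and the `run` helper are all assumptions made for the sketch:

```python
# Hypothetical sketch of action/question test pairs: every question must
# be answerable strictly "yes" or "no", so a test run reduces to a list
# of booleans with no room for interpretation.
test = [
    ("Go to the login page, enter valid credentials, and press Login.",
     "Are you taken to your account dashboard?"),
    ("Click your username in the top-right corner.",
     "Do you see a menu containing a Log out option?"),
]

def run(pairs, answers):
    """A test passes only if every yes/no question was answered 'yes'."""
    assert len(answers) == len(pairs), "one answer per action/question pair"
    return all(a == "yes" for a in answers)

print(run(test, ["yes", "yes"]))  # every step confirmed -> test passes
print(run(test, ["yes", "no"]))   # a single "no" -> the whole test fails
```

The point of the structure is that the tester never reports "it sort of worked": any answer other than "yes" fails the test, which is what makes results comparable across testers.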
You can check out more of our test-writing approach in this blog post: https://www.rainforestqa.com/blog/2016-04-11-how-to-write-better-qa-tests