Tuesday, November 26, 2019

Tell me about yourself



                                     

            My name is Areesha Rahman, and I have a BA in Communication Design. I have been working in the software industry for over six years, with experience in both manual and automated testing of web and client/server applications. During these six years I have performed many types of testing, such as functional and non-functional testing.
            As a software test engineer, I have been involved in all stages of the Software Development Life Cycle. In my previous projects, I had to learn about the application by reading documentation, use cases, design documents, design mockups, and business rules. In my last project with company name, I was involved in requirement assessment after receiving the requirements and design documents. I did the requirement assessment based on:
· Is the requirement specific, explicit, and detailed?
· Is the requirement consistent with other requirements?
· Is the requirement factually correct?
· Is the requirement objectively testable?
· Is the requirement unambiguous?
· Is the requirement testable in available test environment?
· Does the requirement define all references?
· Is the requirement complete?
· Does the requirement specify an exception condition?

            After getting the final requirements:
I exported them to the Requirements section of QC (Quality Center).
Converted the requirements to tests.
Developed test cases and design steps for the tests under the Test Plan section.
Created templates for common steps like log in / log out.
Performed requirement coverage – linking requirements to test cases (so that you can get an exact report of how many requirements passed or failed).
Attached the necessary screenshots and documents to the test cases.
Created test sets in the Test Lab section, and templates if necessary.
Imported tests into the test sets.
Executed and ran the tests under the Test Lab.
Logged and documented defects using the Add Defect icon, filling in all required fields such as assigned by, assigned to, severity, priority, reproducible, version, summary, steps to reproduce the defect, etc. (talk about the defect life cycle).
Tracked, monitored, searched, and filtered defects.
Generated graphs and reports for defects, requirement coverage, etc.

            Alongside the manual testing, I worked on finding the right candidates for regression testing through risk analysis. As we all know, retesting a whole application is never possible because of budget, time, and resource constraints, so we need to choose the right candidates for regression testing so that there are no adverse effects in any part of the application. Sometimes one bug hides behind another, so by doing regression testing we can eliminate more bugs. I selected the right candidates for regression testing by considering:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush and panic mode?
Which part of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?

I consolidated all requirements into multiple scenarios and developed the scripts using QTP/WinRunner. After developing the scripts, I enhanced them by:
· Inserting checkpoints into the test
· Parameterizing tests
· Creating Output Values
· Using Regular expression
· Creating multiple actions, and reused functions
· Debugging Scripts
· Synchronizing tests
· Using and managing recovery scenarios and optional steps
· Creating virtual objects
· Configuring Smart Identification
· Working with Descriptive Programming
· Working with User defined Functions
· Integrating QTP with Quality Center

After script enhancement, I was involved in running the scripts and analyzing the results.

            After regression testing, I was involved in performance testing using LoadRunner, which tests load on applications by emulating an environment in which multiple users work concurrently. LoadRunner replaces real users with virtual users. I was involved in picking the business processes that we would run performance tests on. The performance testing candidates were picked from the following kinds of business processes:
Mission critical
Heavy throughput and
Dynamic content
            As soon as the business processes were picked, I followed the standard performance testing life cycle, which consists of the following:
Gathering requirements
Creating test plan
Once the test environment is available, validate it
Start scripting
Script shakeout (Enhancement)
Inserting transaction points
Inserting Comments
Creating Checkpoints
Rendezvous points
Parameterizing the Scripts
Configuring Run time Settings
Sending Output Message
Correlating the scripts
Creating Custom Vuser Scripts
Test execution {Baseline and Stress}
Create scenarios using the LR Controller: both manual and goal-oriented scenarios
Establish monitors to monitor the test
Analyze the test results (hits/sec, throughput, etc.)
Prepare closeout documents

            I have experience in writing test plans and test cases for both functional testing and performance testing. I was involved in writing the functional test plans using the IEEE standard format, which contains:
Test plan identifier
References
Introduction
Test Items
Software Risk issues
Features to be tested
Features not to be tested
Approach / Test Strategy
Item Pass/Fail Criteria
Suspension Criteria and Resumption Requirements
Test Deliverables
Remaining test tasks
Environmental needs
Staffing and training needs
Responsibilities
Schedule
Planning risks and contingencies
Approvals
Glossary

            I was also involved in writing performance test plans. A performance test plan contains an executive summary that describes the objective of the test, the strategy, the transactions, the purpose of the test, and the scope of the test listing the application functions to be tested. The Organization and Responsibilities section describes the roles and responsibilities of individuals. The Production Environment Description section contains the architecture and hardware specifications and the expected increase in load volumes over the short and long term. The Test Environment Description section contains the environment architecture for every environment used and describes the server hardware specifications. The plan describes assumptions, constraints, risks, and contingencies. The Test Approach section describes the methodology in detail, along with a description of each test step and its execution. The performance test plan also lists the monitoring criteria, with the tools to be used and the environment elements to be monitored. The plan describes test entry and exit criteria, pass/fail criteria, application response time, and hardware and network utilization criteria. The Test Execution section contains the script descriptions with run-time settings. The Project Schedule section lists the major milestone dates for the project. The Test Cycles section describes the execution cycles and specifies the number of Vusers and the goal for the baseline, stress, and endurance testing cycles. The plan also contains acronyms and definitions, deliverables, and appendices.

            I have also developed Requirement Analysis Reports (RAR), Test Analysis Reports (TAR), and Level of Effort (LOE) estimates – how many testers you need and for how many hours. I developed test cases from requirement documents as well as from use cases.

            In many of my projects I was involved in all stages of the software development life cycle, and I have a deep understanding of the development process in different methodologies such as waterfall, iterative, and agile. My educational background, a bachelor's in computer science, and my working experience as a software developer are a great plus for understanding the SDLC. I worked as a software developer and have in-depth knowledge of programming with J2EE, C++, RDBMS, and SQL. I have worked with different operating systems such as UNIX and Linux.

            I am very capable of leading a team and handling complex situations. In one project I was leading four people, and one person quit close to the deadline. I had to work hard to cover that and keep the team motivated. There are also situations when new requirements keep coming in. I have overcome such complex situations and will be able to contribute a lot to your organization.


Test Automation Interview Questions for beginners and advanced level candidates are listed in this tutorial.

Test automation plays a very important role in the entire software lifecycle. Most of the time when we want to prepare for an automation testing interview, we only focus on tool-specific questions.

However, we should also consider the fact that learning and knowing the tool is just a means and not the ultimate goal.

Automation Testing Interview Questions
Thus, whenever we are preparing for an automation tester interview, we have to consider “Automation” as a whole and focus on the framework and the steps involved.

We all know that software testing is a very important part of software development. But, with the rapidly growing software development methodologies and environments, it becomes difficult to manually test everything for an application within a limited time along with cost constraints.

Thus, automation testing is rapidly growing in the market to keep up with the development pace. This tutorial includes the top interview questions on automation testing.

I have tried to list short and quick questions that are specific to automation as a whole and are not specific to any particular “tool”.

Top 39 Automation Testing Interview Questions And Answers
We have covered basic test automation questions as well as some advanced questions for intermediate to expert level candidates with 2 to 5 years of experience.

Q #1) What is Automation?

Answer: Automation is any action which can reduce human efforts.

Q #2) What is Automation testing?

Answer: The process of using special software tools or scripts to perform testing tasks such as entering data, executing the test steps and comparing the results etc. is known as Automation testing.

Q #3) What all things can you automate?

Answer:

Regression test suite
Smoke / Sanity test suite
Build deployment
Test data creation
Automating behind the GUI like testing of APIs and methods.
Q #4) When is Automation testing useful?

Answer: Automation testing is useful in the following scenarios:

a) Regression testing: In case of a bug fix or new module implementation, we have to make sure that the already implemented or unchanged functionality is not affected. In this case, we end up running the regression test case multiple times.

For Example: After each change request or bug fix, after each iteration in case of incremental development approach etc.

b) Non-functional Testing: Testing the non-functional aspects of an application.

For Example: Load testing or performance testing etc, are very difficult for humans to track and analyze.

c) Complex calculation checks or tests scenarios that are prone to human errors.

d) Repeated execution of same tests: Sometimes we have to run the same set of test case for a different set of data or after each build release or on multiple hardware, software or combination of both.

Automating the test cases in the above scenarios helps in achieving the speed of testing and minimizing human errors.

Q #5) How do you identify the test cases which are suitable for automation?

Answer: Identifying the appropriate test cases for automation is the most important step towards automation. Good candidates are test cases that are executed repeatedly (regression and smoke suites), data-driven tests that run with many data sets, and checks that are prone to human error; tests that are rarely executed or exploratory in nature are poor candidates.

Q #6) Can you achieve 100% automation?

Answer: 100% automation is difficult to achieve because there will be many edge cases and some test cases that are seldom executed. Automating cases that are not executed often will not add value to the automated suite.

Q #7) How to decide the tool that one should use for Automation testing in their projects?

Answer: In order to identify the tool for Automation testing in your project:

a) Understand your project requirements thoroughly and identify the testing scenarios that you want to automate.

b) Search for the list of tools that support your project's requirements.

c) Identify your budget for the automation tool. Select the tools within your budget.

d) Identify if you already have skilled resources for the tools. If you don't have the necessary skilled resources then identify the cost for training the existing resources or hiring new resources.

e) Now compare each tool for key criteria like:

How easy is it to develop and maintain the scripts for the tool.
Can a non-technical person also execute the test cases with little training?
Does the tool support different types of platforms like web, mobile, desktop etc based on your project requirements?
Does the tool have a test reporting functionality? If not, is it easily configurable for the tool?
How is the tool for cross-browser support for web-based applications?
How many different testing types can this tool support?
How many languages does the tool support?
f) Once you have compared the tools, select the tool which is within your budget and supports your project requirements, and gives you more advantages based on the key criteria mentioned above.

Q #8) Currently I do not have any automation in place in my project, but now I want to implement automation, what would be my steps?

Answer:

First, identify which type of testing/test cases you want to automate.
Identify the tool
Design the framework
Create utility files and environment files.
Start scripting
Identify and work on the reporting.
Allocate time for enhancing and maintaining the scripts.
Steps required for getting Automation Testing in place for a project include:

Understand the advantages and disadvantages of automation testing and identify the test scenarios which are suitable for automation.
Select the automation tool that is best suited for automating the identified scenarios
Find the tool expert to help in setting up the tool and required environment for executing the test cases using the tool.
Train the team so that they can write scripts in the programming language that the tool supports.
Create the test framework or identify the already existing one that meets your requirements.
Write an execution plan for OS, browsers, mobile devices etc.
Write programming scripts for manual test cases to convert them into automated test cases.
Report the test case status by using the reporting feature of the tool.
Maintain the scripts for ongoing changes or new features.
Q #9) How do you decide which tool you have to use?

Answer: Concluding which tool is best suitable for the project requires a lot of brainstorming and discussions.

Q #10) Once you identify the tool what would be your next steps?

Answer: Once we finalize the tool, our next step would be to design the framework.

Q #11) What is a framework?

Answer: A framework is the structure of the entire automation suite. It is also a set of guidelines which, if followed, results in a structure that is easy to maintain and enhance.

These guidelines include:

Coding standards
Handling the test data
Maintaining and handling the elements (object repository in QTP)
Handling of environment files and properties file
Reporting of data
Handling logs
Q #12) What are the attributes of a good framework?

Answer: The characteristics include:

Modular – The framework should be adaptable to change. Testers should be able to modify the scripts when the environment or the login information changes.
Reusable – The commonly used methods or utilities should be written in a common file which is accessible to all the scripts.
Consistent – The suite should be written in a consistent format by following all the accepted coding practices.
Independent – The scripts should be written in such a way that they are independent of each other. In case one test fails, it should not hold back the remaining test cases (unless it is a login page)
Logger – It is good to have a logging feature implemented in the framework. This helps when our scripts run for longer hours (say, in a nightly run); if a script fails at any point, the log file helps us locate the failure along with the type of error (a minimal sketch follows this list).
Reporting – It is good to have the reporting feature automatically embedded into the framework. Once the scripting is done, we can have the results and reports sent via email.
Integration – The automation framework should be easy to integrate with other applications, such as a continuous integration server that triggers the automated scripts as soon as the build is deployed.
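As a minimal sketch of the Logger attribute, assuming a Java-based framework and the standard java.util.logging API (the file name and messages are illustrative only):

import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class TestLogger {
    // Creates a logger that writes to a file so overnight runs can be debugged later.
    public static Logger create(String logFile) throws IOException {
        Logger logger = Logger.getLogger("automation");
        FileHandler handler = new FileHandler(logFile, true);  // append to the existing log
        handler.setFormatter(new SimpleFormatter());
        logger.addHandler(handler);
        return logger;
    }
}

A script can then record progress with logger.info("Opened login page") and failures with logger.severe(...), so the log file shows where and why a nightly run failed.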
Q #13) Can you do without a framework?

Answer: Frameworks are guidelines and not mandatory rules, so we can do without a framework, but if we create one and follow it, enhancing and maintaining the suite becomes much easier.

Q #14) What are the different types of an automation tool that you are aware of?

Answer: Open source tools like Selenium, JMeter, etc.

Paid tools like QTP, LoadRunner, Ranorex, RFT, and Rational Robot.

Q #15) What generally is the structure of a framework?

Answer: Normally the structure would be as follows (it differs from project to project):

A “src” (source) folder having the actual test scripts.
A “lib” (library) folder having all the libraries and common methods.
A “class” folder having all the class files (in case of using Java).
A “log” folder having the log file(s).
A file/folder having all the web element Ids.
A file containing the URL, environment and login information.
Q #16) Where will you maintain information like URL, login, password?

Answer: This information should always be maintained in a separate file.

Q #17) Why do you want to keep this kind of information in a separate file and not directly in the code?

Answer: URLs, logins, and passwords are the kind of fields that are used very often, and they change with the environment and authorization. If we hardcode them into our code, we have to change them in every file that references them.

If there are more than 100 files, it becomes very difficult to change all 100 of them, which in turn can lead to errors. So this kind of information is maintained in a separate file so that updating it becomes easy.
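As a hedged sketch of how this separate file might look in a Java-based framework, a plain properties file can be read once with java.util.Properties; the file name, keys, and values below are hypothetical.

// config.properties (hypothetical contents):
//   base.url=https://example.com/login
//   username=testuser
//   password=secret

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class Config {
    private static final Properties PROPS = new Properties();

    static {
        // Load the environment details once; every script reads from here,
        // so a change of URL or credentials is made in a single place.
        try (FileInputStream in = new FileInputStream("config.properties")) {
            PROPS.load(in);
        } catch (IOException e) {
            throw new RuntimeException("Could not load config.properties", e);
        }
    }

    public static String get(String key) {
        return PROPS.getProperty(key);
    }
}

Scripts then call Config.get("base.url") instead of hardcoding the URL, so an environment change touches one file instead of a hundred.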

Q #18) What are the different types of frameworks?

Answer: Different types of the framework include:

Keyword-driven framework
Data-Driven framework
Hybrid Framework
Linear Scripting
Q #19) Can you tell some good coding practices while automation?

Answer: Some of the good coding practices include:

Add appropriate comments.
Identify the reusable methods and write it in a separate file.
Follow the language-specific coding conventions.
Maintain the test data in a separate file.
Run your scripts regularly.
Q #20) Any kind of test which you think should not be automated?

Answer:

Tests which are seldom executed.
Exploratory testing
Usability testing
Test which is executed quickly when done manually.
Q #21) Do you think that testing can be done only at the UI level?

Answer: Today, as we are moving to the Agile mode, testing is not limited to the UI layer. Early feedback is imperative for an agile project. If we concentrate only on the UI layer, we are actually waiting until the UI is developed and available to test.

Rather we can test even before the UI is actually developed. We can directly test the APIs or the methods using tools like Cucumber and FitNesse.

In this way, we are giving feedback much earlier and are testing even before the UI is developed. Following this approach, UI testing only needs to cover small cosmetic changes or validations on the UI, and the developers get more time to fix the bugs.

Q #22) How do you select which automation tool is best suited for you?

Answer: Selecting the automation tool depends upon various factors like:

The scope of the application which we want to automate.
Management overhead like cost and budget.
Time to learn and implement the tool.
Type of support available for the tool.
Limitation of the tool
Q #23) What do you think holds the testers back to do automation? Is there a way to overcome it?

Answer: The major hurdle for testers is learning programming/coding when they want to automate. Since many testers do not come from a coding background, adapting to coding is a bit challenging for them.

We can overcome it by:

Collaborating with developers when automating.
Considering that automation is the responsibility of the whole team and not only of the testers.
Giving a dedicated time and focus on automation.
Getting proper management support.

Q #24) What is an Automation testing framework?

Answer: A framework, in general, is a set of guidelines. A set of guidelines, assumptions, concepts and coding practices for creating an execution environment in which the tests will be automated, is known as an Automation testing framework.

An automation testing framework is responsible for creating a test harness with a mechanism to connect with the application under test, take input from a file, execute the test cases and generate the reports for test execution. An automation testing framework should be independent of the application and it should be easy to use, modify or extend.

Q #25) What are the important modules of an automation testing framework?

Answer: Important modules of an Automation testing framework are:

Test Assertion Tool: This tool provides assert statements for testing the expected values in the application under test, e.g. TestNG, JUnit, etc. (a short sketch follows this list).
Data Setup: Each test case needs to take user data either from a database, from a file, or embedded in the test script. The framework's data module should take care of the data intake for the test scripts and the global variables.
Build Management Tool: The framework needs to be built and deployed for the use of creating test scripts.
Continuous Integration Tool: With CI/CD (continuous integration and continuous delivery) in place, a continuous integration tool is required for integrating and deploying the changes done in the framework at each iteration.
Reporting Tool: A reporting tool is required to generate a readable report after the test cases are executed, for a better view of the steps, results, and failures.
Logging Tool: The logging tool in the framework helps in better debugging of errors and bugs.
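As a short sketch of the Test Assertion Tool module, here is a hypothetical TestNG test; the class, method, and expected value are assumptions for illustration only.

import org.testng.Assert;
import org.testng.annotations.Test;

public class LoginPageTitleTest {

    @Test
    public void pageTitleMatchesExpectation() {
        // In a real framework this value would come from the application under test.
        String actualTitle = fetchTitleFromApplication();
        Assert.assertEquals(actualTitle, "Login", "Unexpected page title");
    }

    private String fetchTitleFromApplication() {
        // Placeholder for a call into the application layer of the framework.
        return "Login";
    }
}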
Q #26) Explain some Automation testing tools.

Answer: Some of the famous Automation testing tools are explained below:

(i) Selenium: Selenium is a test framework for web application automation testing. It supports multiple browsers and is OS independent. Selenium also supports various programming languages like java, c#, PHP, Ruby, and Perl etc.

Selenium is an open source set of libraries which can be used to develop additional test frameworks or test scripts for testing web-based applications.

(ii) UFT: Unified functional testing is a licensed tool for functional testing. It provides a wide range of features like APIs, web services etc and also supports multiple platforms like desktops, web, and mobile. UFT scripts are written in visual basic scripting language.

(iii) Appium: Appium is an open source mobile application testing tool. It is used to automate testing on cross-platform, native, hybrid and web-based mobile applications. Appium automates any mobile application from any language with full access to APIs and DBs from the test code.

Appium is based on client-server architecture and has evolved from selenium.

(iv) Cucumber: Cucumber is an open source behavior-driven development tool. It is used for web-based application automation testing and supports languages like ruby, java, scala, groovy etc. Cucumber reads executable specification written in plain text and tests the application under test for those specifications.

For cucumber to understand the scenarios in plain text, we have to follow some basic syntax rules which are known as Gherkin.

(v) TestComplete: TestComplete is a licensed automated UI testing tool to test the application across different platforms like desktops, web, mobile etc. It provides flexibility to record a test case on one browser and run it on multiple browsers and thus supports cross browsers testing.

TestComplete has inbuilt object recognition algorithm which uniquely identifies an object and stores it in the repository.

Q #27) What are the different types of testing framework techniques?

Answer: There are four types of automation testing framework techniques.

They are:

#1) Modular Testing framework:

This framework is built on the concept of abstraction. In this framework, the tester creates scripts for each module of the application under test individually and then these scripts are combined in the hierarchical order to create large test cases.

It creates an abstraction layer between the modules, thus any modifications in test scripts for one module do not affect any other modules.

Advantages of this framework:

Easier maintenance and scalability of test cases.
Creating test cases by using already scripted modules is easier and faster.
Disadvantages:

Test cases have data embedded in them, so executing the same test script with different data requires changes at the script level.
#2) Data Driven Testing framework:

In the Data-driven testing framework, the input data and the expected output data corresponding to the input data are stored in a file or database, and the automated script runs the same set of test steps for multiple sets of data. With this framework, we can run multiple test cases where only the input data differs and the steps of execution are the same (a minimal sketch follows the advantages and disadvantages below).

Advantages:

Reduces the number of test scripts that are required to be executed. We execute the same script multiple times with different data.
Less coding for automation testing.
Greater flexibility for maintaining and fixing the bugs or enhancing the functionality.
Test data can be created even before the automated system for testing is ready.
Disadvantages:

Only similar test cases with the same set of execution steps can be combined for multiple sets of data. A different set of execution steps requires a different test case.
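A minimal sketch of the data-driven idea, assuming TestNG: the same test steps run once per row of data supplied by a @DataProvider. The rows are inlined here for brevity; in a real framework they would be read from the file or database mentioned above, and the login scenarios are illustrative.

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginDataDrivenTest {

    @DataProvider(name = "credentials")
    public Object[][] credentials() {
        // Each row is one data set: username, password, expected outcome.
        return new Object[][] {
            {"", "", false},                    // blank username and password
            {"wrongUser", "wrongPass", false},  // invalid username and password
            {"validUser", "wrongPass", false},  // valid username, invalid password
            {"validUser", "validPass", true},   // valid username and password
        };
    }

    @Test(dataProvider = "credentials")
    public void loginBehavesAsExpected(String user, String pass, boolean shouldSucceed) {
        boolean result = attemptLogin(user, pass);   // the same steps for every data set
        Assert.assertEquals(result, shouldSucceed);
    }

    private boolean attemptLogin(String user, String pass) {
        // Placeholder for the real UI or API steps driven by the framework.
        return "validUser".equals(user) && "validPass".equals(pass);
    }
}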
#3) Keyword-Driven Testing framework:

It is an application-independent testing framework which uses data tables and self-explanatory keywords. The keywords explain the actions to be performed on the application under test, and the data table provides the input and expected output data.

Keyword-driven testing is an extension of data-driven testing (a minimal sketch follows below).

Advantages:

Less coding and the same script can be used for multiple sets of data.
Automation expertise is not required for creating a test case using the already existing keywords for actions.
Same keywords can be used across multiple test cases.
Disadvantages:

This framework is more complicated, as it needs to take care of the keyword actions as well as the data input.
Test cases get longer and more complex, thereby affecting their maintainability.
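A minimal sketch of the keyword idea, assuming Selenium WebDriver underneath: each row of the data table names an action, a target element, and test data, and a dispatcher maps the keyword to the corresponding step. The keywords and the id-based locator strategy are assumptions for illustration.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class KeywordExecutor {
    private final WebDriver driver;

    public KeywordExecutor(WebDriver driver) {
        this.driver = driver;
    }

    // keyword names the action, target is an element id, value is the test data for that step.
    public void execute(String keyword, String target, String value) {
        switch (keyword) {
            case "openUrl":
                driver.get(value);
                break;
            case "type":
                driver.findElement(By.id(target)).sendKeys(value);
                break;
            case "click":
                driver.findElement(By.id(target)).click();
                break;
            default:
                throw new IllegalArgumentException("Unknown keyword: " + keyword);
        }
    }
}

A test case then becomes a table of rows such as openUrl | | https://example.com, type | username | validUser, click | loginBtn |, which can be assembled from existing keywords without writing new code.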
#4) Hybrid Testing framework:

This framework is a combination of all the above-mentioned testing frameworks ( Modular, data-driven, and keyword-driven).

In this framework, the test cases are developed from modular scripts by combining them in the modular testing framework. Each of the test cases uses a driver script which uses a data file as in data-driven framework and a keyword based action file.

Advantages:

Modular and easy to maintain.
Less coding can take care of more test cases.
One test case can be executed with multiple sets of data.
Disadvantages:

Complex to read, maintain and enhance.
Q #28) When do you prefer Manual testing over Automation testing?

Answer: We prefer manual testing over automation testing if:

The project is short-term and writing scripts will be time-consuming and costly when compared to manual testing.
Flexibility is required. Automated test cases are programmed to run only with the specific configurations they were written for.
Usability testing needs to be performed.
The application/module is newly developed and has no previous test cases.
Ad-hoc or exploratory testing needs to be performed.
Q #29) Is Automation testing in agile Methodology useful or not?

Answer: Automation testing is useful for regression, smoke or sanity testing. All these types of testing in traditional waterfall model happen at the end of the cycle and sometimes if there are not many enhancements to the application, we might not even have to do regression testing.

Whereas, in agile methodology, every iteration requires executing the regression test case as a new functionality is added.

Also, the regression suite itself keeps growing after each sprint as the functional test cases of the current sprint module need to be added to the regression suite for the next sprint.

Thus, Automation testing in agile methodology is very useful and helps in achieving maximum test coverage in a lesser time of the sprint.

Q #30) List some advantages and disadvantages of Automation testing.

Answer:

Advantages:

Fewer Human resources
Reusability
More Test Coverage in less time
Reliability
Parallel execution of test cases
Fast
Disadvantages:

Development and maintenance time is more.
Tool Cost
Skilled resources are required.
Environment setup
Test Script debugging is an issue.
Q #31) List some advantages and disadvantages of Manual testing.

Answer:

Advantages:

No environment setup needed.
Programming knowledge is not required.
Recommended for dynamically changing requirements.
Allows human observation power to detect more bugs.
Cost is less for short-term projects.
Flexibility
Disadvantages:

Difficult to perform complex calculations.
Lack of reusability
Time-consuming
High risk of human errors or mistakes.
More human resources are required.
Q #32) Can we do Automation testing without a framework? If yes, then why do we need a framework?

Answer: Yes, we can perform automation testing even without using a framework. We just need to understand the tool we are using for automation and program the steps in the programming language that the tool supports.

If we automate test cases without a framework then there won't be any consistency in the programming scripts for test cases.

A framework is required to give a set of guidelines that everyone has to follow to have maintained readability, reusability, and consistency in the test scripts. A framework also provides one common ground for reporting and logging functionality.

Q #33) How will you automate basic “login” functionality test cases for an application?

Answer: Assuming that the automation tool and framework are already in place in the test environment (a minimal Selenium sketch follows the steps below).

To test the basic “Login” functionality:

Understand the project requirement: Login functionality will have a username textbox, a password textbox, and a login button.
Identify the Test scenarios: For login functionality, the possible test scenarios are:
Blank username and password
Invalid username and password
A valid username and invalid password
Valid username and password
Prepare a Data input file with the data corresponding to each scenario.
Launch the tool from the program.
Identify the username field, password field, and the login button.
For each test scenario, get the data from the data file and enter it into the corresponding fields, then programmatically click on the login button after entering the data.
Validate the error message for negative scenarios and the success message for positive scenarios in the test script with the help of assertions.
Run the test suite and generate the report.
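Putting these steps together, a hedged sketch in Java with Selenium WebDriver and TestNG; the URL, element ids, test data, and success message are assumptions, and a real script would read them from the data and configuration files described earlier.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginTest {
    private WebDriver driver;

    @BeforeClass
    public void launchBrowser() {
        driver = new ChromeDriver();                 // assumes a local chromedriver setup
        driver.get("https://example.com/login");     // hypothetical URL
    }

    @Test
    public void validCredentialsShowSuccessMessage() {
        driver.findElement(By.id("username")).sendKeys("validUser");   // hypothetical ids and data
        driver.findElement(By.id("password")).sendKeys("validPass");
        driver.findElement(By.id("loginBtn")).click();

        String message = driver.findElement(By.id("message")).getText();
        Assert.assertEquals(message, "Login successful");   // assertion validates the outcome
    }

    @AfterClass
    public void closeBrowser() {
        driver.quit();
    }
}

The negative scenarios would follow the same pattern, asserting the error message instead of the success message.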
Q #34) Is Automation testing a Black box testing or White-box testing?

Answer: Automation testing is mostly black box testing, as we just program the steps that a manual tester performs on the application under test, without knowing the low-level design or code of the application.

Sometimes, automated test scripts need access to the database details used in the application under test, or to some other coding details, and thus can be a type of white-box testing.

Thus, automated testing can be either black box or white box testing, depending on the scenarios in which the automation is performed.

Q #35) How many test cases have you automated per day?

Answer: Well, the number depends on the complexity of the test cases. When the complexity was limited, I was able to automate 5 to 6 test cases per day. Sometimes, I was able to automate only one test case for complex scenarios.

I have also broken down my test cases into different components like, take input, do the calculation, verify the output etc. in case of very complex scenarios and have taken 2 or more days.

Q #36) What factors determine the effectiveness of Automation testing?

Answer: Some of the factors that determine the effectiveness of automation testing are:

Time saved by running scripts over the manual execution of test cases.
Defects found
Test Coverage or code coverage
Maintenance time or development time
Stability of the scripts
Test Reusability
Quality of the software under test
Q #37) Which test cases can be automated?

Answer: Types of test cases which can be automated are:

(i) Smoke test cases: Smoke testing is also known as build verification testing. Smoke test cases are run every time when a new build is released to check the health of the build for acceptance to perform testing.

(ii) Regression Test Cases: Regression testing is the testing to ensure that previously developed modules are functioning as expected after a new module is added or a bug is fixed.

Regression test cases are very crucial in incremental software approach where a new functionality is added at each increment phase. In this case, regression testing is performed at each incremental phase.

(iii) Complex Calculation test cases: Test cases which involve some complex calculations to verify a field for an application fall into this category. Complex calculation results are more prone to human errors hence when automated they give accurate results.

(iv) Data-driven test cases: Test cases which have the same set of steps and run multiple times with the change of data are known as data-driven test cases. Automated testing for these kinds of test cases is quick and cost-effective.

(v) Non-functional test cases: Test cases like load tests and performance tests require a simulated environment with multiple users and multiple hardware or software combinations.

Setting up multiple environments manually for each combination or number of users is impossible. Automated tools can easily create such environments to perform non-functional testing.

Q #38) What are the phases in Automation testing Life Cycle?

Answer: The phases in Automation testing life Cycle include:

The decision to perform automation testing.
Identify and learn about the automation tool.
Determine the scope of automation testing.
Design and develop a test suite.
Test Execution
Maintenance of test scripts.
Q #39) What is an Automated test script?

Answer: An automated test script is a short program written in a programming language to perform a set of instructions on an application under test and verify whether the application behaves as per the requirements.

When run, this program gives the test result as pass or fail depending on whether the application behaves as expected.

Conclusion
These are the main questions that are independent of the automation tool or programming language. Automation testing interviews also include tool and programming language specific questions depending upon the tool that you have worked with.

Most of the test automation interview questions are centered on the framework you develop, so it is recommended that you create and understand your test framework thoroughly. When I am interviewing, and the candidate has answered my question on the framework, I also prefer asking a language specific question (core java in my case).

The questions start from the basics of Java and move on to writing the logic for some basic scenarios like the ones below (a hedged sketch for the first two follows the list):

How would you extract a set of text from a given line?
How would you extract a URL?
On any web page, in any frame, the number of links and their content change dynamically; how would you handle it?
How do you handle images and flash objects?
How do you find a word in a line?
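The answers are language specific, but as a hedged illustration of the first two questions in Java, the standard regex API can do the extraction; the sample line and the simplified URL pattern are assumptions.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TextExtraction {
    public static void main(String[] args) {
        String line = "Release notes are published at https://example.com/notes for every build";

        // Extract a URL-like token from the line (simplified pattern, not a full URL grammar).
        Matcher urlMatcher = Pattern.compile("https?://\\S+").matcher(line);
        if (urlMatcher.find()) {
            System.out.println("URL: " + urlMatcher.group());
        }

        // Find a word in the line as a whole word, ignoring case.
        boolean hasWord = Pattern.compile("\\bbuild\\b", Pattern.CASE_INSENSITIVE)
                                 .matcher(line)
                                 .find();
        System.out.println("Contains 'build': " + hasWord);
    }
}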
Answers to all these test automation interview questions are very much specific to the tool/language that you are using for automating. So before you go for the interview, brush up your programming skills.

In case you did not get a chance to create your framework and someone else has created it, then make some time to understand it thoroughly before sitting for the interview.

Some tips for automation testing interviews would be:

Know your tool thoroughly.
Learn the locator techniques used by your tool.
Practice programming using the language which you use for automation testing.
Learn your framework and its components.
It’s always advantageous if you have been involved in the development of your framework. So, be thorough with the modules in the framework which you have worked on.
Hope these questions would be much useful for you to prepare for a test automation interview.


Saturday, November 9, 2019

Types Of Software Testing: Different Testing Types With Details

What are the different types of Software Testing?
We, as testers are aware of the various types of Software Testing such as Functional Testing, Non-Functional Testing, Automation Testing, Agile Testing, and their sub-types, etc.
Each of us would have come across several types of testing in our testing journey. We might have heard of some and we might have worked on some, but not everyone has knowledge of all the testing types.
Each type of testing has its own features, advantages, and disadvantages as well. However, in this article, I have covered most of the types of software testing which we usually use in our day-to-day testing life.
Let’s go and have a look at them.

Different Types Of Software Testing

Given below is the list of some common types of Software Testing:
Functional Testing types include:
  • Unit Testing
  • Integration Testing
  • System Testing
  • Sanity Testing
  • Smoke Testing
  • Interface Testing
  • Regression Testing
  • Beta/Acceptance Testing
Non-functional Testing types include:
  • Performance Testing
  • Load Testing
  • Stress Testing
  • Volume Testing
  • Security Testing
  • Compatibility Testing
  • Install Testing
  • Recovery Testing
  • Reliability Testing
  • Usability Testing
  • Compliance Testing
  • Localization Testing
Let's see more details about these Testing types.

#1) Alpha Testing

It is the most common type of testing used in the Software industry. The objective of this testing is to identify all possible issues or defects before releasing it into the market or to the user.
Alpha Testing is carried out at the end of the software development phase but before the Beta Testing. Still, minor design changes may be made as a result of such testing.
Alpha Testing is conducted at the developer’s site. In-house virtual user environment can be created for this type of testing.

#2) Acceptance Testing

An Acceptance Test is performed by the client and verifies whether the end-to-end flow of the system is as per the business requirements and the needs of the end-user. The client accepts the software only when all the features and functionalities work as expected.
It is the last phase of the testing, after which the software goes into production. This is also called User Acceptance Testing (UAT).

#3) Ad-hoc Testing

The name itself suggests that this testing is performed on an Ad-hoc basis i.e. with no reference to the test case and also without any plan or documentation in place for such type of testing.
The objective of this testing is to find the defects and break the application by executing any flow of the application or any random functionality.
Ad-hoc Testing is an informal way of finding defects and can be performed by anyone in the project. It is difficult to identify defects without a test case, but sometimes defects found during ad-hoc testing might not have been identified using the existing test cases.

#4) Accessibility Testing

The aim of Accessibility Testing is to determine whether the software or application is accessible for disabled people or not.
Here, disability covers deafness, color blindness, mental disability, blindness, old age, and other disabled groups. Various checks are performed, such as font size for the visually disabled, and color and contrast for color blindness, etc.

#5) Beta Testing

Beta Testing is a formal type of Software Testing which is carried out by the customer. It is performed in the Real Environment before releasing the product to the market for the actual end-users.
Beta Testing is carried out to ensure that there are no major failures in the software or product and it satisfies the business requirements from an end-user perspective. Beta Testing is successful when the customer accepts the software.
Usually, this testing is typically done by end-users or others. It is the final testing done before releasing an application for commercial purpose. Usually, the Beta version of the software or product released is limited to a certain number of users in a specific area.
So the end-users actually use the software and share their feedback with the company, and the company then takes the necessary actions before releasing the software worldwide.

#6) Back-end Testing

Whenever input or data is entered in a front-end application, it is stored in the database, and the testing of such a database is known as Database Testing or Backend Testing.
There are different databases like SQL Server, MySQL, and Oracle, etc. Database Testing involves testing of table structure, schema, stored procedure, data structure and so on.
In Back-end Testing the GUI is not involved; testers are directly connected to the database with proper access and can easily verify data by running a few queries on the database.
Issues like data loss, deadlocks, and data corruption can be identified during this back-end testing, and these issues are critical to fix before the system goes live in the production environment.

#7) Browser Compatibility Testing

It is a subtype of Compatibility Testing (which is explained below) and is performed by the testing team.
Browser Compatibility Testing is performed for web applications and it ensures that the software can run with the combination of different browser and operating system. This type of testing also validates whether web application runs on all versions of all browsers or not.

#8) Backward Compatibility Testing

It is a type of testing which validates whether newly developed or updated software works well with an older version of the environment or not.
Backward Compatibility Testing checks whether the new version of the software works properly with file formats created by an older version of the software; it should also work well with data tables, data files, and data structures created by the older version of that software.
If any of the software is updated then it should work well on top of the previous version of that software.

#9) Black Box Testing

Internal system design is not considered in this type of testing. Tests are based on the requirements and functionality.
Detailed information about the advantages, disadvantages, and types of Black box Testing can be seen here.

#10) Boundary Value Testing

This type of testing checks the behavior of the application at the boundary level.
Boundary Value Testing is performed to check whether defects exist at the boundary values. It is used for testing ranges of numbers. There is an upper and a lower boundary for each range, and testing is performed on these boundary values.
If testing requires a range of numbers from 1 to 500, then Boundary Value Testing is performed on the values 0, 1, 2, 499, 500, and 501.
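A minimal sketch of those checks in Java with TestNG, assuming a hypothetical isInRange helper whose valid range is 1 to 500:

import org.testng.Assert;
import org.testng.annotations.Test;

public class BoundaryValueTest {

    // Hypothetical helper under test: the valid range is 1 to 500 inclusive.
    private boolean isInRange(int value) {
        return value >= 1 && value <= 500;
    }

    @Test
    public void valuesAroundTheBoundaries() {
        Assert.assertFalse(isInRange(0));    // just below the lower boundary
        Assert.assertTrue(isInRange(1));     // lower boundary
        Assert.assertTrue(isInRange(2));     // just above the lower boundary
        Assert.assertTrue(isInRange(499));   // just below the upper boundary
        Assert.assertTrue(isInRange(500));   // upper boundary
        Assert.assertFalse(isInRange(501));  // just above the upper boundary
    }
}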

#11) Branch Testing

It is a type of White Box Testing and is carried out during Unit Testing. As the name itself suggests, in Branch Testing the code is tested thoroughly by traversing every branch.

#12) Comparison Testing

Comparing a product's strengths and weaknesses with its previous versions or with other similar products is termed Comparison Testing.

#13) Compatibility Testing

It is a testing type which validates how the software behaves and runs in different environments, on different web servers and hardware, and in different network environments.
Compatibility Testing ensures that the software can run on different configurations, different databases, and different browsers and their versions. Compatibility Testing is performed by the testing team.

#14) Component Testing

It is mostly performed by developers after the completion of unit testing. Component Testing involves testing of multiple functionalities as a single code and its objective is to identify if any defect exists after connecting those multiple functionalities with each other.

#15) End-to-End Testing

Similar to system testing, End-to-End Testing involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

#16) Equivalence Partitioning

It is a testing technique and a type of Black Box Testing. During Equivalence Partitioning, a group of values is selected and a few values or numbers from that group are picked up for testing. It is understood that all values from that group generate the same output.
The aim of this testing is to remove redundant test cases within a specific group which generate the same output but do not reveal any new defect.
Suppose the application accepts values between -10 and +10; using equivalence partitioning, the values picked up for testing are zero, one positive value, and one negative value. So the equivalence partitions for this testing are -10 to -1, 0, and 1 to 10.
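A hedged sketch of the same idea in Java with TestNG, picking one representative value from each partition of the -10 to +10 example plus one value outside the range; the accepts helper is hypothetical.

import org.testng.Assert;
import org.testng.annotations.Test;

public class EquivalencePartitionTest {

    // Hypothetical helper under test: accepts values between -10 and +10 inclusive.
    private boolean accepts(int value) {
        return value >= -10 && value <= 10;
    }

    @Test
    public void oneRepresentativePerPartition() {
        Assert.assertTrue(accepts(-5));   // representative of the -10 to -1 partition
        Assert.assertTrue(accepts(0));    // the zero partition
        Assert.assertTrue(accepts(7));    // representative of the 1 to 10 partition
        Assert.assertFalse(accepts(25));  // representative of the invalid partition
    }
}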

#17) Example Testing

It means real-time testing. Example Testing includes the real-time scenario, it also involves the scenarios based on the experience of the testers.

#18) Exploratory Testing

Exploratory Testing is informal testing performed by the testing team. The objective of this testing is to explore the application and look for defects that exist in it.
Sometimes a major defect discovered during this testing can even cause a system failure.
During Exploratory Testing, it is advisable to keep track of what flow you have tested and what activity you did before the start of the specific flow.
The Exploratory Testing technique is performed without documentation and test cases.

#20) Functional Testing

This type of testing ignores the internal parts and focuses only on the output to check if it is as per the requirement or not. It is a Black-box type testing geared to the functional requirements of an application. For detailed information about Functional Testing click here.

#21) Graphical User Interface (GUI) Testing

The objective of this GUI Testing is to validate the GUI as per the business requirement. The expected GUI of the application is mentioned in the Detailed Design Document and GUI mockup screens.
The GUI Testing includes the size of the buttons and input field present on the screen, alignment of all text, tables, and content in the tables.
It also validates the menu of the application. After selecting different menus and menu items, it validates that the page does not fluctuate and the alignment remains the same after hovering the mouse over the menu or sub-menu.

#22) Gorilla Testing

Gorilla Testing is a testing type performed by a tester, and sometimes by the developer as well. In Gorilla Testing, one module or one functionality in the module is tested thoroughly and heavily. The objective of this testing is to check the robustness of the application.

#23) Happy Path Testing

The objective of Happy Path Testing is to test an application successfully on a positive flow. It does not look for negative or error conditions. The focus is only on the valid and positive inputs through which application generates the expected output.

#24) Incremental Integration Testing

Incremental Integration Testing is a Bottom-up approach for testing i.e continuous testing of an application when new functionality is added. Application functionality and modules should be independent enough to test separately. This is done by programmers or by testers.

#25) Install/Uninstall Testing

Installation and Uninstallation Testing is done on full, partial, or upgrade install/uninstall processes on different operating systems under different hardware or software environment.

#26) Integration Testing

Testing of all integrated modules to verify the combined functionality after integration is termed as Integration Testing.
Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

#27) Load Testing

It is a type of Non-Functional Testing and the objective of Load Testing is to check how much load or maximum workload a system can handle without any performance degradation.
Load Testing helps to find the maximum capacity of the system under specific load and any issues that cause software performance degradation. Load testing is performed using tools like JMeter, LoadRunner, WebLoad, Silk performer, etc.

#28) Monkey Testing

Monkey Testing is carried out by a tester assuming that if a monkey were to use the application, random input and values would be entered without any knowledge or understanding of the application.
The objective of Monkey Testing is to check whether an application or system crashes when random input values/data are provided. Monkey Testing is performed randomly, no test cases are scripted, and it is not necessary to be aware of the full functionality of the system.

#29) Mutation Testing

Mutation Testing is a type of White Box Testing in which the source code of the program is changed, and it is verified whether the existing test cases can identify these defects in the system.
The change in the program source code is kept minimal so that it does not impact the entire application; only the specific area is impacted, and the related test cases should be able to identify those errors in the system.

#30) Negative Testing

Testers with an “attitude to break” mindset use Negative Testing to validate whether the system or application breaks. The Negative Testing technique is performed using incorrect data, invalid data, or invalid input. It validates that the system throws an error for invalid input and behaves as expected.

#31) Non-Functional Testing

It is a type of testing for which many organizations have a separate team, usually called the Non-Functional Test (NFT) team or the Performance team.
Non-Functional Testing involves testing of non-functional requirements, such as Load Testing, Stress Testing, Security, Volume, and Recovery Testing. The objective of NFT is to ensure that the response time of the software or application is quick enough as per the business requirement.
It should not take much time to load any page or system, and the system should sustain itself during peak load.

#32) Performance Testing

This term is often used interchangeably with ‘stress' and ‘load' testing. Performance Testing is done to check whether the system meets the performance requirements. Different performance and load tools are used to do this testing.

#33) Recovery Testing

It is a type of testing which validates how well the application or system recovers from crashes or disasters.
Recovery Testing determines whether the system is able to continue operating after a disaster. Assume that the application is receiving data through a network cable, and suddenly that network cable is unplugged.
Some time later, plug the network cable back in; the system should then start receiving data from where it lost the connection when the cable was unplugged.

#34) Regression Testing

Testing an application as a whole after a modification in any module or functionality is termed Regression Testing. It is difficult to cover the entire system in Regression Testing, so Automation Testing tools are typically used for this type of testing.

#35) Risk-Based Testing (RBT)

In Risk-Based Testing, the functionalities or requirements are tested based on their priority. Risk-Based Testing includes testing of highly critical functionality, which has the highest impact on business and in which the probability of failure is very high.
The priority decision is based on the business need, so once priority is set for all functionalities then high priority functionality or test cases are executed first followed by medium and then low priority functionalities.
The low priority functionality may be tested or not tested based on the available time.
The Risk-Based Testing is carried out if there is insufficient time available to test entire software and software needs to be implemented on time without any delay. This approach is followed only by the discussion and approval of the client and senior management of the organization.

#36) Sanity Testing

Sanity Testing is done to determine whether a new software version is performing well enough to accept it for a major testing effort or not. If an application is crashing on initial use, then the system is not stable enough for further testing, and the build is assigned back to the development team to be fixed.

#37) Security Testing

It is a type of testing performed by a special team of testers. It checks whether the system can be penetrated by any hacking method.
Security Testing is done to check how secure the software, application, or website is from internal and external threats. This testing includes checking how secure the software is from malicious programs and viruses, and how secure and strong the authorization and authentication processes are.
It also checks how the software behaves under a hacker attack or malicious programs, and how the data security is maintained after such an attack.

#38) Smoke Testing

Whenever a new build is provided by the development team, the software testing team validates the build and ensures that no major issues exist.
The testing team ensures that the build is stable, and only then is a detailed level of testing carried out. Smoke Testing checks that no show-stopper defect exists in the build which would prevent the testing team from testing the application in detail.
If testers find that a major critical functionality is broken at the initial stage itself, the testing team can reject the build and inform the development team accordingly. Smoke Testing is carried out before any detailed Functional or Regression Testing.

#39) Static Testing

Static Testing is a type of testing which is executed without running any code. It is performed on the documentation during the testing phase.
It involves reviews, walkthroughs, and inspections of the deliverables of the project. Static Testing does not execute the code; instead, the code syntax and naming conventions are checked.
Static Testing is also applicable to test cases, test plans, and design documents. It is necessary for the testing team to perform static testing, as the defects identified during this type of testing are cost-effective from the project perspective.

#40) Stress Testing

This testing is done when a system is stressed beyond its specifications in order to check how and when it fails. It is performed under heavy load, for example putting in data beyond storage capacity, running complex database queries, or giving continuous input to the system or database.

#41) System Testing

Under System Testing technique, the entire system is tested as per the requirements. It is a Black-box type Testing that is based on overall requirement specifications and covers all the combined parts of a system.

#42) Unit Testing

Testing of an individual software component or module is termed as Unit Testing. It is typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It may also require developing test driver modules or test harnesses.
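As a minimal illustration, a unit test written by the programmer might look like the following JUnit sketch; the calculator-style method and the 10% discount rule are hypothetical.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical unit under test: applies a 10% discount to the given amount.
    private double applyDiscount(double amount) {
        return amount * 0.9;
    }

    @Test
    public void discountIsAppliedToTheAmount() {
        // assertEquals(expected, actual, delta) for floating-point comparison.
        assertEquals(90.0, applyDiscount(100.0), 0.0001);
    }
}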

#43) Usability Testing

In Usability Testing, a user-friendliness check is done. The application flow is tested to know whether a new user can understand the application easily, and whether proper help is documented in case a user gets stuck at any point. Basically, system navigation is checked in this testing.

#44) Vulnerability Testing

The testing which involves identifying weaknesses in the software, hardware, and network is known as Vulnerability Testing. Malicious programs, viruses, worms, or a hacker can take control of the system if it is vulnerable to such kinds of attacks.
So it is necessary to check whether those systems undergo Vulnerability Testing before production. It may identify critical defects and flaws in the security.

#45) Volume Testing

Volume Testing is a type of Non-Functional Testing performed by the Performance Testing team.
The software or application is subjected to a huge amount of data, and Volume Testing checks the system behavior and the response time of the application when the system comes across such a high volume of data. This high volume of data may impact the system's performance and the speed of processing.

#46) White Box Testing

White Box Testing is based on knowledge of the internal logic of an application's code.
It is also known as Glass Box Testing. The internal workings of the software and code should be known to perform this type of testing. These tests are based on the coverage of code statements, branches, paths, conditions, etc.

Conclusion

The above-mentioned software testing types are just a part of testing. There are more than 100 types of testing in all, but not all of them are used in every project. So I have covered some common types of software testing which are mostly used in the testing life cycle.
Also, there are alternative definitions or processes used in different organizations, but the basic concept is the same everywhere. These testing types, processes, and their implementation methods keep changing as the project, requirements, and scope change.