Tell Me About Yourself
My name is Areesha Rahman, and I have a BA in Communication Design. I have been working in the software industry for over six years, and I am experienced in both manual and automated testing of web and client/server applications. During these six years I have performed many types of testing, such as functional and non-functional testing.
As a software test engineer, I was involved in all stages of the Software Development Life Cycle. In my previous projects, I had to learn the application by reading documentation, use cases, design documents, design mockups, and business rules. In my last project with [company name], I was involved in requirement assessment after receiving the requirements and design documents. I did the requirement assessment based on:
· Is the requirement specific, explicit, and detailed?
· Is the requirement consistent with other requirements?
· Is the requirement factually correct?
· Is the requirement objectively testable?
· Is the requirement unambiguous?
· Is the requirement testable in the available test environment?
· Does the requirement define all references?
· Is the requirement complete?
· Does the requirement specify an exception condition?
After the requirements were finalized, I exported them to the Requirements section of Quality Center (QC). From there I:
Converted requirements to tests.
Developed test cases and design steps under the Test Plan section.
Created templates for common steps such as log in / log out.
Performed requirement coverage by linking requirements to test cases, so you can get an exact report of how many requirements passed or failed (a sketch of that report follows this list).
Attached the necessary screenshots and documents to the test cases.
Created test sets in the Test Lab section, along with templates where necessary.
Imported tests into the test sets.
Executed the tests under Test Lab.
Logged defects using the Add Defect icon, filling in all required fields such as assigned by, assigned to, severity, priority, reproducible, version, summary, and steps to reproduce (be ready to talk about the defect life cycle).
Tracked, monitored, searched, and filtered defects.
Generated graphs and reports for defects, requirement coverage, etc.
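To illustrate the kind of requirement coverage report that linking makes possible, here is a minimal sketch in C. The requirement names, the linked test results, and the rule that a requirement passes only if all of its linked tests pass are illustrative assumptions; Quality Center computes this for you.

#include <stdio.h>
#include <string.h>

/* Hypothetical link between a requirement and one of its test cases. */
struct Link {
    const char *requirement;
    int         test_passed;   /* 1 = the linked test passed, 0 = it failed */
};

int main(void)
{
    /* Illustrative coverage data: two requirements, three linked tests. */
    struct Link links[] = {
        { "REQ-1 Login",  1 },
        { "REQ-1 Login",  1 },
        { "REQ-2 Search", 0 },
    };
    int n = sizeof links / sizeof links[0];
    int i, j;

    for (i = 0; i < n; i++) {
        /* Report each requirement only once. */
        for (j = 0; j < i; j++)
            if (strcmp(links[j].requirement, links[i].requirement) == 0)
                break;
        if (j < i)
            continue;

        /* A requirement passes only if every test linked to it passed. */
        int all_passed = 1;
        for (j = 0; j < n; j++)
            if (strcmp(links[j].requirement, links[i].requirement) == 0
                && !links[j].test_passed)
                all_passed = 0;

        printf("%s: %s\n", links[i].requirement,
               all_passed ? "Passed" : "Failed");
    }
    return 0;
}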
Alongside the manual testing, I worked to find the right candidates for regression testing through risk analysis. As we all know, retesting a whole application is rarely possible because of budget, time, and resource constraints, so we need to choose the right regression candidates to make sure a change has no adverse effects on any part of the application. Sometimes one bug hides behind another, so good regression testing lets us eliminate more bugs. I selected the right candidates for regression testing by considering the following questions (a scoring sketch follows the list):
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush and panic mode?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
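As a rough illustration of turning those questions into a ranking, here is a minimal risk-scoring sketch in C. The feature names, the 1-to-5 likelihood and impact scores, the risk = likelihood x impact model, and the cutoff of 12 are all assumptions made up for this example, not a formal method.

#include <stdio.h>

/* Hypothetical feature with risk factors scored 1 (low) to 5 (high). */
struct Feature {
    const char *name;
    int likelihood;   /* chance of failure: complexity, rushed code, unclear specs */
    int impact;       /* cost of failure: safety, money, visibility to users */
};

int main(void)
{
    /* Illustrative scores; in practice they come from the team's
       answers to the questions above. */
    struct Feature features[] = {
        { "Payment processing", 4, 5 },
        { "Search",             3, 3 },
        { "Profile page",       2, 2 },
    };
    int n = sizeof features / sizeof features[0];

    for (int i = 0; i < n; i++) {
        /* Simple model: risk = likelihood * impact; the cutoff is arbitrary. */
        int risk = features[i].likelihood * features[i].impact;
        printf("%-20s risk = %2d%s\n", features[i].name, risk,
               risk >= 12 ? "  -> regression candidate" : "");
    }
    return 0;
}

The point is not the arithmetic but the habit: score each area against the questions above, then spend the limited regression budget on the highest-risk areas first.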
I consolidated all the requirements into multiple scenarios and developed the scripts using QTP/WinRunner. After developing the scripts, I enhanced them by:
· Inserting checkpoints into the test
· Parameterizing tests
· Creating Output Values
· Using regular expressions
· Creating multiple actions and reusable functions
· Debugging Scripts
· Synchronizing tests
· Using and managing recovery scenarios and optional steps
· Creating virtual objects
· Configuring Smart Identification
· Working with Descriptive Programming
· Working with user-defined functions
· Integrating QTP with Quality Center
After script enhancement, I was involved in running the scripts and analyzing the results.
After regression testing, I was involved in performance testing using LoadRunner, which tests load on an application by emulating an environment in which multiple users work concurrently; LoadRunner replaces real users with virtual users (Vusers). I was involved in picking the business processes we would performance test. The candidates were picked from business processes that are:
Mission critical
Heavy throughput
Dynamic content
Once the business processes were picked, I followed the standard performance testing life cycle (a Vuser script sketch illustrating the enhancement steps follows this list):
Gathering requirements
Creating the test plan
Validating the test environment once it becomes available
Scripting
Script shakeout (enhancement):
· Inserting transaction points
· Inserting comments
· Creating checkpoints
· Inserting rendezvous points
· Parameterizing the scripts
· Configuring run-time settings
· Sending output messages
· Correlating the scripts
· Creating custom Vuser scripts
Test execution (baseline and stress):
· Creating scenarios in the LoadRunner Controller, both manual and goal-oriented
· Establishing monitors to watch the test
· Analyzing the results (hits/sec, throughput)
Closeout documents
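To make the scripting and enhancement steps concrete, here is a minimal sketch of a VuGen Action section (LoadRunner Vuser scripts for web protocols are written in C). The URLs, parameter names, and correlation boundaries below are placeholders invented for illustration; the lr_* and web_* calls themselves are the standard LoadRunner API.

Action()
{
    /* Correlation: register to capture the dynamic session ID from the
       server's login response so later requests can reuse it. The
       boundaries are assumptions about the response format. */
    web_reg_save_param("SessionID",
                       "LB=sessionid=",
                       "RB=\"",
                       LAST);

    /* Transaction point around the login business process. */
    lr_start_transaction("Login");

    /* Parameterization: {UserName} and {Password} are drawn from a
       data file, so each Vuser logs in with different credentials. */
    web_submit_data("login",
                    "Action=http://example.com/login",
                    "Method=POST",
                    ITEMDATA,
                    "Name=user", "Value={UserName}", ENDITEM,
                    "Name=pass", "Value={Password}", ENDITEM,
                    LAST);

    lr_end_transaction("Login", LR_AUTO);

    /* Rendezvous point: hold Vusers here so they all hit the next
       step at the same moment during the stress cycle. */
    lr_rendezvous("search_peak");

    lr_start_transaction("Search");
    web_url("search",
            "URL=http://example.com/search?sid={SessionID}",
            LAST);
    lr_end_transaction("Search", LR_AUTO);

    /* Output message: log the correlated value for debugging. */
    lr_output_message("Session ID used: %s", lr_eval_string("{SessionID}"));

    return 0;
}

During execution the Controller runs many copies of this script as Vusers, and the Analysis module reports metrics such as hits/sec and throughput broken down by the transactions marked above.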
I have experience writing test plans and test cases for both functional and performance testing. I was involved in writing functional test plans using the IEEE standard format, which contains:
Test plan identifier
References
Introduction
Test Items
Software Risk issues
Features to be tested
Features not to be tested
Approach / Test Strategy
Item Pass/Fail Criteria
Suspension Criteria and Resumption Requirements
Test Deliverables
Remaining test tasks
Environmental needs
Staffing and training needs
Responsibilities
Schedule
Planning risks and contingencies
Approvals
Glossary
I was also involved in writing performance test plans. A performance test plan contains an executive summary that describes the objective of the test, the strategy, the transactions, the purpose of the test, and the scope, listing the application functions to be tested. The Organization and Responsibilities section describes the roles and responsibilities of individuals. The Production Environment Description section covers the architecture and hardware specifications, along with the expected short-term and long-term increases in load volumes. The Test Environment Description section covers the environment architectures for all environments used and describes the server hardware specifications. The plan describes assumptions, constraints, risks, and contingencies. The Test Approach section details the methodology and describes each test step and its execution. The plan also lists the monitoring criteria, the tools to be used, and the environment elements to be monitored. It defines test entry and exit criteria and pass/fail criteria covering application response time, hardware utilization, and network utilization. The Test Execution section contains script descriptions with run-time settings. The Project Schedule section lists the major milestone dates for the project. The Test Cycles section describes the execution cycles, specifying the number of Vusers and the goal for each of the baseline, stress, and endurance cycles. Finally, the plan contains acronyms and definitions, deliverables, and appendices.
I have also developed Requirement Analysis Reports (RAR), Test Analysis Reports (TAR), and Level of Effort (LOE) estimates (how many testers you need and for how many hours). I developed test cases from requirement documents as well as from use cases.
In many of my projects I was involved in all stages of the software development life cycle, and I have a deep understanding of the development processes in different methodologies, such as waterfall, iterative, and agile. My educational background (a bachelor's in computer science) and my working experience as a software developer are a great plus for understanding the SDLC. As a developer I gained in-depth knowledge of programming with J2EE, C++, RDBMS, and SQL, and I have worked with different operating systems such as UNIX and Linux.
I am very capable of leading a team and handling complex situations. On one project I was leading four people, and one person quit close to the deadline; I had to work hard to cover the gap and keep the team motivated. There are also situations when new requirements keep coming in. I have overcome those complex situations, and I will be able to contribute a lot to your organization.