August 9, 2023

The following insight gives an idea of the typical cycle of a performance test.

When conducting a performance test, each of the following stages should be carried out by a performance test specialist, with the assistance of others where indicated.

TEST SCRIPT DEVELOPMENT STAGE

Test script development typically occurs against the same interfaces that are used by real users in production. There are exceptions where no direct user interface exists; for example, scripts that send web service requests might be generated from WSDL files.

During this stage performance testers can generate or record test scripts using the recording functionality of the performance test tool. As only one or two user ids are required for this process, scripting can occur on an environment that is not scaled for non-functional test execution.

These initial recorded scripts are then developed with parameters so that multiple sets of user data, such as user ids and passwords, can be passed through them, and coded so that system and data related errors are handled and reported rather than halting the run.
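As a simple illustration, the sketch below is written in Python using the requests library, purely an illustrative choice since each performance test tool has its own scripting language. It iterates over a hypothetical CSV file of credentials and reports failures rather than letting one bad row stop the run; the URL, form field names and file layout are all assumptions made for the example.

import csv
import requests

# Hypothetical data-driven login sketch: the URL, form field names and CSV
# columns (user_id, password) are illustrative assumptions.
LOGIN_URL = "https://test-env.example.com/login"

def run_data_driven_logins(credential_file="users.csv"):
    with open(credential_file, newline="") as f:
        for row in csv.DictReader(f):
            try:
                resp = requests.post(
                    LOGIN_URL,
                    data={"username": row["user_id"], "password": row["password"]},
                    timeout=30,
                )
                if resp.status_code != 200:
                    # Data or system related error: report it and carry on.
                    print(f"{row['user_id']}: HTTP {resp.status_code}")
            except requests.RequestException as exc:
                print(f"{row['user_id']}: request failed - {exc}")

if __name__ == "__main__":
    run_data_driven_logins()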

One of the most complicated tasks in writing performance test scripts is applying the correct correlation. When, for example, you generate performance test scripts from recorded HTTP traffic, you are effectively recording the traffic of one single user working with particular objects or records, and as such, values relating to that session will be hard-coded in the recording.

The key to a good performance test script is the ability to simulate multiple users working on multiple objects and records. That requires techniques such as correlation: capturing values such as individual session ids from server responses and applying them to subsequent requests, so that the integrity of each user thread launched is maintained.
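To make the technique concrete, here is a minimal sketch in Python using the requests library. It assumes the application returns a session id in the login response body; in practice the exact boundaries of the dynamic value come from inspecting the recorded traffic, and a commercial tool would use its own correlation functions. The URLs and token format are assumptions.

import re
import requests

BASE = "https://test-env.example.com"   # illustrative environment

def run_user_thread(user_id, password):
    # Log in and capture the dynamic session id from the response body.
    # The regular expression is an assumption standing in for the real
    # left/right boundaries found in the recorded traffic.
    login = requests.post(f"{BASE}/login",
                          data={"username": user_id, "password": password},
                          timeout=30)
    match = re.search(r'sessionId="([^"]+)"', login.text)
    if not match:
        raise RuntimeError(f"{user_id}: session id not found - correlation failed")
    session_id = match.group(1)

    # Re-inject the correlated value into subsequent requests so this virtual
    # user keeps its own session rather than the one hard-coded in the recording.
    resp = requests.get(f"{BASE}/orders",
                        params={"sessionId": session_id},
                        timeout=30)
    resp.raise_for_status()
    return resp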

In addition, each user should not be working with the same object or record, as that is unrealistic. The performance tester should find a way to have multiple objects and records being accessed by multiple users at any one time. This could be something as simple as running a query in the application to return all records, analysing the query response and having the performance test tool pick one at random for that individual user thread to work with. Some tools, such as LoadRunner, have correlation modules that can assist with this process, but many do not, so it is a skill a performance tester needs to master manually.
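A hedged sketch of that approach, again in Python with requests and assuming a hypothetical endpoint that returns all records as JSON, might look like this:

import random
import requests

BASE = "https://test-env.example.com"   # illustrative environment

def pick_random_record(session):
    # Query the application for all candidate records, then have this virtual
    # user work on one chosen at random, so that concurrent users are spread
    # across many different objects rather than all hitting the same one.
    resp = session.get(f"{BASE}/api/records", timeout=30)
    resp.raise_for_status()
    record_ids = [record["id"] for record in resp.json()]   # response shape assumed
    return random.choice(record_ids)

# Example usage within a user thread:
# session = requests.Session()
# record_id = pick_random_record(session)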

Transaction timing points are coded into the test scripts to measure response times for individual server requests. Scripts are also coded with user think times so that, when executed, they simulate the pauses between server requests that occur when a real user carries out the business transaction.
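The sketch below shows both ideas in plain Python: a named timing point wrapped around a server request, and a randomised think time between requests. The transaction name, mean think time and URL are illustrative assumptions; real tools provide equivalent start/end transaction and think time functions.

import random
import time
import requests

def timed_transaction(name, func, *args, **kwargs):
    # Named timing point around a single server request, mirroring the
    # start/end transaction markers offered by performance test tools.
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.3f}s")
    return result

def think(mean_seconds=5.0, variation=0.2):
    # Pause between requests to simulate a real user reading the page.
    time.sleep(random.uniform(mean_seconds * (1 - variation),
                              mean_seconds * (1 + variation)))

# Example usage (the URL is an assumption):
# timed_transaction("search_orders", requests.get,
#                   "https://test-env.example.com/orders?q=widgets", timeout=30)
# think()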

Scripts are also coded so that they can reiterate. During the test script development stage, any test data required to allow multiple users to execute the scripted business transactions within a non-functional scenario is identified. This data may need to be sourced from the application database via queries, generated externally, derived from system responses or randomly generated.
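As an illustration of two of those data sources, the sketch below (Python, with hypothetical file and field names) shows randomly generated values alongside values read from an externally sourced extract, cycled so the script can reiterate indefinitely:

import itertools
import random
import string

def generated_customer_names(count):
    # Randomly generated test data, suitable where any unique value will do.
    for _ in range(count):
        yield "CUST-" + "".join(random.choices(string.ascii_uppercase, k=8))

def customer_names_from_file(path):
    # Externally sourced test data, e.g. an extract from the application database.
    with open(path) as f:
        for line in f:
            yield line.strip()

# Cycle the data so a long-running script never runs out of values:
# customers = itertools.cycle(customer_names_from_file("customers.txt"))
# next_customer = next(customers)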

TEST SCENARIO DEVELOPMENT STAGE

During this stage the test scripts are grouped into test scenarios, which allow user load profiles to be configured. The scenarios can be configured to simulate the user activity defined by the earlier volumetric analysis (the definition of the user traffic expected to be experienced in production). A series of scenarios can be developed to reflect a range of user activities and test system stability.
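As one concrete (and purely illustrative) example, the open-source Locust tool expresses a scenario as a Python class grouping several scripted transactions. The host, task weights and wait times below are assumptions standing in for the figures produced by the volumetric analysis.

from locust import HttpUser, task, between

class BrowsingUser(HttpUser):
    # Illustrative scenario: the host, weights and wait times are assumptions
    # that would normally come from the volumetric analysis.
    host = "https://test-env.example.com"
    wait_time = between(5, 15)          # think time between iterations

    @task(3)                            # three searches for every order placed
    def search(self):
        self.client.get("/search?q=widgets")

    @task(1)
    def place_order(self):
        self.client.post("/orders", json={"item": "widget", "qty": 1})

# A user load profile can then be applied from the command line, for example:
# locust -f scenario.py --users 200 --spawn-rate 10 --run-time 1h --headless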

SYSTEM MONITORING STAGE

In order to correlate user activity with system stability and to identify points of system failure, the system servers must be monitored.

Where functionally possible, the performance testing tool is configured to monitor the servers directly, although more often statistics are taken from the servers’ native resource monitoring utilities and then imported into the test tool for reporting after the test has concluded.
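Where native utilities are used, the collection itself can be very simple. The sketch below uses the third-party psutil library (an illustrative choice; sar, vmstat or perfmon serve the same purpose) to sample CPU and memory at a fixed interval and write the samples to CSV for later import alongside the test tool’s results.

import csv
import time
import psutil   # third-party library, used here purely as an illustration

def monitor(output="server_stats.csv", interval_seconds=5, duration_seconds=3600):
    # Sample CPU and memory utilisation on the server under test and record
    # each sample with a timestamp so it can be correlated with test results.
    with open(output, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
        end = time.time() + duration_seconds
        while time.time() < end:
            cpu = psutil.cpu_percent(interval=interval_seconds)   # blocks for the interval
            writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"),
                             cpu,
                             psutil.virtual_memory().percent])
            f.flush()

if __name__ == "__main__":
    monitor()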

Access to system monitoring should therefore be provided to the performance testing team, so they can correlate the results from the system monitoring against the response time and error rate results returned by the performance test tool. This requirement will be identified in the non-functional test strategy as a deliverable from the project group to the performance test team.

TEST EXECUTION STAGE

During this stage, the test execution, test analysis and issue resolution phases form an iterative process, i.e. a period of testing is followed by a period of analysis, then issue resolution followed by more testing.

This approach is aimed at identifying and driving out the major issues first, then moving on to the next. Some applications perform perfectly first time and do not require this iterative process, although this is rare. It is important that a detailed test diary is kept to record all test execution activity, with the objective of identifying and, where possible, driving fixes for any performance bottlenecks experienced.

During test execution, communication channels are established by the non-functional testers, and key project technical resources are encouraged to participate in the analysis of the system while tests run, in order to expedite the issue identification and resolution process. These people might include:

•    DBAs

•    Network administrators

•    Developers

•    Server administrators

TEST REPORTING STAGE

During the iterative test execution and tuning stages interim reports are generated and sent to the project group.

These reports are usually fairly ad hoc, quickly summarising the latest test findings, issues and any blockers for the performance test team, and reporting these to all stakeholders in the performance test process. This allows for rapid cycles of testing and tuning whilst keeping the project group updated on progress and issues.

FINAL TEST REPORT STAGE

At the point where test execution demonstrates a stable system, or a system that is as stable as possible within the allotted performance testing window, a final report is generated based on the results of the final test execution. The final test report generally includes the following:

•    A management summary providing a high-level overview and key findings, without too much technical analysis.

•    Detailed technical results of the performance testing and tuning effort.

•    Detailed metrics against the required Service Level Agreements and performance requirements as detailed in the initial test strategy.

•    Issues identified and rectified.

•    Recommendations.

•    Conclusions and next steps.

The final report measures the results and testing activities against the non-functional testing strategy that will have been completed and approved earlier in the engagement, ensuring that the test results are measurable and auditable. Any changes to the scope of testing agreed during the testing stage will also be detailed in the final test report.

