October 3, 2023

For many reasons, performance testing is often conducted on an environment smaller than its production equivalent, but can this affect the meaningfulness of the results?

First off, we need to establish why we might want (or have) to conduct performance testing on a scaled-down environment in the first place.


Building and maintaining an environment equivalent to production can be expensive, and we may question whether it is worth procuring and building such an environment purely for a month or two of testing activity. Performance testing can be an expensive business, especially if you select a tool with a licensing cost, so a scaled-down environment may be easier to fund and maintain.


Even the budget and resource for a dedicated scaled-down performance test environment may not be available, in which case it is not unusual for performance testing to take place on an existing environment already in use by testers or developers.


Difficulties can arise when other activity is taking place on the environment at the same time as the performance test. What is this extra activity, and can it skew the results I am getting from the performance test? If I conduct the performance test while other testers are using the environment, am I liable to slow down or even crash the system, disrupting their activities?

One way around this problem is to schedule performance tests to run out-of-hours where possible; however, make sure there are no overnight batch jobs or application restarts which may interfere with the performance test and stop it in its tracks. If you decide this is the way to go, then comprehensive monitoring while the test is running is essential, as you will probably be reviewing the test the next morning. You will want to match notable events in the performance test against the corresponding server-side metrics and activities, so make sure monitoring information and log files are retained long enough to let you do this!
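As a rough sketch, an out-of-hours run could be scheduled with cron, pairing the test run with a retention step so the results are still there for the morning review. The tool, paths, test plan name and retention period below are all invented for the example:

```shell
# Hypothetical crontab entries -- tool, paths and retention period are assumptions.
# Run a JMeter test plan in non-GUI mode at 01:00, writing timestamped results
# (note: % must be escaped as \% inside a crontab command).
0 1 * * * /opt/jmeter/bin/jmeter -n -t /opt/tests/overnight.jmx \
    -l /var/perf/results/run-$(date +\%Y\%m\%d).jtl >> /var/perf/run.log 2>&1

# Prune result files older than 30 days, so recent runs remain available
# for matching test events against server-side metrics
30 7 * * * find /var/perf/results -name '*.jtl' -mtime +30 -delete
```

The second entry is the retention side of the advice above: whatever the retention period is, it should comfortably exceed the gap between the overnight run and the analysis.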


As part of the test preparation process, we are likely to perform a volumetric analysis, which defines the number of users we expect to see on production and the amount and kind of traffic, so that we can design a scenario and run a performance test equivalent to production traffic.

With a scaled-down performance test environment we may want to do one of two things: (a) understand the difference in scale between the test and production environments and factor down the traffic identified in the volumetric analysis accordingly, or (b) run the entire set of production traffic identified in the volumetric analysis against the scaled-down environment. If the scaled-down environment can handle production traffic, this is a good indication that the production environment will be able to handle production load.
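For option (a), once the scale ratio between the two environments is understood, the factoring-down itself is simple arithmetic. A minimal sketch, where the volumetric figures and the 50% scale ratio are invented purely for illustration:

```python
# Sketch of factoring production volumetrics down for a scaled-down test
# environment. All figures and the scale ratio are invented for illustration.

def factor_down(volumetrics: dict, scale_ratio: float) -> dict:
    """Scale each production volume by the test/production capacity ratio."""
    return {name: round(volume * scale_ratio) for name, volume in volumetrics.items()}

# Hypothetical figures from a volumetric analysis of production traffic
production = {
    "concurrent_users": 2000,
    "orders_per_hour": 12000,
    "searches_per_hour": 45000,
}

# Assume the test environment has roughly half the capacity of production
scaled = factor_down(production, scale_ratio=0.5)
print(scaled)  # {'concurrent_users': 1000, 'orders_per_hour': 6000, 'searches_per_hour': 22500}
```

The hard part, of course, is establishing a defensible scale ratio in the first place; raw CPU and memory counts rarely tell the whole story, which is exactly why results from a scaled-down environment need careful interpretation.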


Even when using a scaled-down performance test environment or an existing test environment, useful bottlenecks and issues can still be identified; however, they need to be treated with care, as there may be differences between the test and production environments which are not immediately apparent.

Subtle configuration differences (such as database configuration, memory allocation, database connection pool sizes, or the absence of load balancers in test) may exist between the test environment and production. These need to be fully appreciated before interpreting any results, in order to understand whether an issue is solely down to the configuration of the test environment and would be unlikely to manifest in production.


Testing on a scaled-down or shared test environment is without question a valuable exercise, as long as the results are carefully assessed to ensure that issues are genuine performance issues and not just limitations of the scaled-down environment.
