One of the biggest problems for a performance tester, or any non-functional tester, is that, unlike functional testing, there is very rarely a set of non-functional criteria available to test against. As with any test, you need some sort of metric or result that defines what a passed test looks like and what a failed test looks like, so where do you go for that type of information?
Usually, Service Level Agreements (also known as SLAs) and Non-Functional Requirements (also known as NFRs) can both be used to provide criteria against which to assess performance testing, but they are different, so it is worth knowing the difference and how each can be applied to validate performance testing.
SERVICE LEVEL AGREEMENTS
On one side, we have Service Level Agreements. An SLA is a formal agreement between a service provider, such as a software vendor, and a customer that outlines the level of service the provider agrees to deliver.
SLAs are typically focused on defining the acceptable performance criteria (expressed in metrics such as response time and/or system availability) for a service or application, and as such provide the customer with a clear indication of how the system should perform. An SLA constitutes a measurable requirement that can be tested and marked as passed or failed accordingly.
SLAs usually take two main forms:
a) System Availability - a typical SLA here could be 'the application is required to be available 23.5 hours per day, every day except Sundays.'
b) Response Time - a typical SLA here could be 'all user transactions must respond to the user within 2 seconds.'
Performance testers generally work with the response-time type of SLA, as system availability cannot easily be validated through performance testing.
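To make the response-time SLA concrete, here is a minimal sketch of how a set of measured transaction timings might be checked against the 2-second SLA above. The function name, sample data, and the choice of a 95th-percentile check are all illustrative assumptions: in practice, SLAs are often applied at a percentile rather than the maximum, so a single outlier does not fail the whole run.

```python
import math

# Hypothetical check of the 'respond within 2 seconds' SLA against
# sampled transaction timings (in seconds). Names and data are illustrative.

SLA_RESPONSE_SECONDS = 2.0
PERCENTILE = 95  # SLAs are often applied at a percentile, not the max

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Sample response times captured during a test run
response_times = [0.8, 1.1, 1.4, 1.9, 0.7, 1.2, 3.5, 1.0, 1.3, 1.6]

p95 = percentile(response_times, PERCENTILE)
verdict = "PASS" if p95 <= SLA_RESPONSE_SECONDS else "FAIL"
print(f"95th percentile response time: {p95:.2f}s -> {verdict}")
```

Note that with this sample data the run fails: the single 3.5-second outlier falls at the 95th percentile of only ten samples, which is exactly why real SLA checks are usually run against much larger sample sizes.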
SLAs are defined from a business perspective and are usually contractual. They are meant to ensure that the service or application meets the customer's expectations in terms of performance, and failure to do so could incur penalties, compensation, or other measures. SLAs are also often set based on the end-user experience and the expectations of the customers or users of the service, and as such may be different from customer to customer.
NON-FUNCTIONAL REQUIREMENTS
On the other side, we have Non-Functional Requirements. NFRs are the characteristics or attributes that define how a system is required to behave, rather than relating to any specific functionality within it. As such, NFRs are usually not as easily derived as functional requirements and need careful consideration.
NFRs provide technical and design guidelines for achieving desired performance characteristics, covering aspects such as scalability, reliability, CPU and memory utilization, response time, concurrency, and more. They also guide the technical implementation of performance-related aspects; for example, an NFR might specify that the system should handle a certain number of concurrent users without exceeding a given response-time threshold or allowable limits on CPU and memory utilization.
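An NFR of that shape can be expressed as a set of threshold checks over measured test results. The sketch below assumes hypothetical thresholds and metric names (they are not taken from any particular tool): the concurrent-user figure is a floor the system must reach, while the response-time and utilization figures are ceilings it must stay under.

```python
# Illustrative NFR thresholds and a check against measured test results.
# All thresholds, metric names, and measurements are hypothetical examples.

nfr_thresholds = {
    "concurrent_users": 500,   # floor: system must sustain at least this load
    "p95_response_s": 2.0,     # ceiling: 95th-percentile response time
    "cpu_percent": 80.0,       # ceiling: peak CPU utilization
    "memory_percent": 75.0,    # ceiling: peak memory utilization
}

measured = {
    "concurrent_users": 520,
    "p95_response_s": 1.7,
    "cpu_percent": 72.5,
    "memory_percent": 78.0,
}

def evaluate(thresholds, results):
    """Return a dict of metric -> 'PASS'/'FAIL' against each NFR threshold."""
    verdicts = {}
    for metric, limit in thresholds.items():
        value = results[metric]
        # concurrent_users is a floor (>=); every other metric is a ceiling (<=)
        ok = value >= limit if metric == "concurrent_users" else value <= limit
        verdicts[metric] = "PASS" if ok else "FAIL"
    return verdicts

for metric, verdict in evaluate(nfr_thresholds, measured).items():
    print(f"{metric}: {verdict}")
```

In this sample run the load and response-time checks pass but memory utilization (78% against a 75% ceiling) fails, illustrating how a single breached NFR can flag a performance test as failed even when the user-facing response times look healthy.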
In summary, compared to SLAs, which are more customer-focused, NFRs are often more technically focused and guide the development and testing teams in designing and building an application that meets the desired performance characteristics. They help developers and testers understand the performance expectations of the system and design it accordingly, without the contractual implications of SLAs.
The truth is that both SLAs and NFRs play crucial roles in performance testing, helping to ensure that the tested system meets the desired performance standards and user expectations.