August 4, 2020
PERFORMANCE TESTING WITH SSO / OAUTH

Prior to Single Sign-On (SSO) and Open Authorisation (OAuth), our main challenge as performance test specialists was getting sufficient user data (login details, etc.) to reach the volume of concurrency that the test required. How we long for such simple problems!

There are many, many papers that detail how the SSO and OAuth sign-in flows work and are verified. These papers are fantastic for developers who create and have access to ‘the secrets’. However, as a performance tester you only have access to the information that the web server provides you; with SSO and OAuth this is frequently insufficient to create the required tokens or stsRequest. On reflection, this is a good thing: if we could create these tokens and stsRequest by pure correlation, the site would not be particularly secure.

So what have we done to tackle SSO and OAuth?

Like most, we are constantly evolving. Before OAuth/SSO and sophisticated video conferencing became mainstream, we would create scripts at protocol level and the job was done. Today, relying solely on protocol-level scripts has become increasingly challenging (OAuth secrets and tokens) and can be ineffective (video conferencing without opening, configuring and connecting the sockets is like loading a browser with a few empty frames). A combination of performance and functional test tools is the answer.

The problem is that performance and functional test tools are developed for distinctly different purposes. As such, plugging one into the other will result in a compromise of one tool’s features, the other’s, or both. If you are, say, plugging an NUnit Selenium script into Micro Focus’s Silk Performer, it is a square peg in a round hole. As Selenium WebDriver is not a tool but a collection of open-source APIs, it can be considered amorphous; it will therefore fit the round hole of Silk Performer just as well as it fits the square hole of NUnit.

Testing Performance has developed a technique that allows the direct use of Selenium WebDriver within both Micro Focus’s LoadRunner and Silk Performer IDEs; no integrations or extra tooling such as JUnit, NUnit or Eclipse, just a few class path references. Because scripts can be created and maintained within the tool’s IDE, there are no restrictions on the granularity of the transaction timers that can be created; pacing and think times are also managed, with test data handled by the tool as normal. As you would expect, transaction response times are correlated with the server and network statistics monitored and collated during test executions, whilst the tool’s reporting and detailed analysis capabilities remain intact.
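
To make this concrete, here is a minimal sketch of what such a script might look like as a LoadRunner Java Vuser, assuming the Selenium and ChromeDriver libraries are simply referenced on the script’s class path. The URL, element IDs and parameter names (<pUser>, <pPass>) are placeholders rather than details from a real project, and a similar structure applies within Silk Performer.

    // Illustrative only: a LoadRunner Java Vuser driving Chrome via Selenium WebDriver.
    // Assumes the Selenium and ChromeDriver JARs are on the script's class path.
    import lrapi.lr;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.chrome.ChromeOptions;

    public class Actions
    {
        private WebDriver driver;

        public int init() throws Throwable {
            ChromeOptions options = new ChromeOptions();
            options.addArguments("--headless");            // keep the load generators lightweight
            driver = new ChromeDriver(options);
            return 0;
        }

        public int action() throws Throwable {
            // Transaction timers at whatever granularity the business process needs.
            lr.start_transaction("Login_Page");
            driver.get("https://example.com/login");       // placeholder URL
            lr.end_transaction("Login_Page", lr.AUTO);

            lr.start_transaction("Submit_Credentials");
            driver.findElement(By.id("username")).sendKeys(lr.eval_string("<pUser>"));  // tool-managed test data
            driver.findElement(By.id("password")).sendKeys(lr.eval_string("<pPass>"));
            driver.findElement(By.id("submit")).click();
            lr.end_transaction("Submit_Credentials", lr.AUTO);

            lr.think_time(5);                              // pacing/think time handled by the tool
            return 0;
        }

        public int end() throws Throwable {
            if (driver != null) {
                driver.quit();
            }
            return 0;
        }
    }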

We have also recently developed the capability for mixed-mode scripting. Our customer was unable to provide the secret credentials for OAuth on Azure Active Directory, so a single script used WebDriver to open a headless Chrome instance to complete the login (OAuth); all of the required session and cookie data were then captured and used at protocol level for the rest of the script. This approach can and should be extended to construct a more ‘real life’ performance test. Before OAuth and SSO, a user would have to sign in to the Web App every time they accessed it. SSO is intended to reduce this to a minimum, so a performance test where each user repeatedly signs in is no longer realistic.
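
As an illustration of the mixed-mode idea, the sketch below uses plain Java: headless Chrome handles the OAuth sign-in, the resulting cookies are harvested, and the remainder of the session is replayed over HTTP. In a real script the protocol-level half would use the load tool’s own web requests rather than HttpURLConnection, and the URLs shown are placeholders.

    // Illustrative mixed-mode flow: browser for the OAuth login, protocol level afterwards.
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Set;
    import java.util.stream.Collectors;

    import org.openqa.selenium.Cookie;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.chrome.ChromeOptions;

    public class MixedModeLogin {
        public static void main(String[] args) throws Exception {
            // 1. Browser mode: let headless Chrome negotiate the OAuth/SSO login,
            //    so the script never needs the client secret itself.
            ChromeOptions options = new ChromeOptions();
            options.addArguments("--headless");
            WebDriver driver = new ChromeDriver(options);
            driver.get("https://example.com/login");        // placeholder sign-in URL
            // ... drive the identity provider's login pages here (findElement/sendKeys/click) ...

            // 2. Capture the session cookies the browser ended up with.
            Set<Cookie> cookies = driver.manage().getCookies();
            String cookieHeader = cookies.stream()
                    .map(c -> c.getName() + "=" + c.getValue())
                    .collect(Collectors.joining("; "));
            driver.quit();

            // 3. Protocol mode: replay the rest of the business process as plain
            //    HTTP requests, re-using the authenticated session.
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://example.com/api/orders").openConnection();  // placeholder endpoint
            conn.setRequestProperty("Cookie", cookieHeader);
            System.out.println("Protocol-level response: " + conn.getResponseCode());
        }
    }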

So how can the simulation be improved to be more ‘real life’? The logic is outlined below, with a rough code sketch after the lists.

If the user has not signed in within the last 24 hours:

  • Use the WebDriver to perform the initial sign in.
  • Switch to protocol mode to complete the script.
  • Save all of the session and cookie data that is required.

If the user has signed in within the last 24 hours:

  • Retrieve the session and cookie data.
  • Run in protocol mode to complete the script.
  • Save any updated session and cookie data.
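
Below is a rough, hypothetical sketch of that branching, assuming a simple per-user cookie store on the load generator; the file name, 24-hour lifetime and helper methods are illustrative only, and Java 11+ is assumed for Files.readString/writeString.

    // Illustrative only: decide per iteration whether a WebDriver sign-in is needed.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.time.Duration;
    import java.time.Instant;

    public class SessionAwareUser {

        // Hypothetical per-user cookie store held on the load generator.
        private static final Path COOKIE_STORE = Paths.get("cookies_user42.txt");
        private static final Duration SESSION_LIFETIME = Duration.ofHours(24);

        public static void main(String[] args) throws IOException {
            String cookieHeader;
            if (hasValidSession()) {
                // Returning user: retrieve the saved session/cookie data and stay at protocol level.
                cookieHeader = Files.readString(COOKIE_STORE);
            } else {
                // New (or expired) user: only now pay the cost of a WebDriver sign-in.
                cookieHeader = signInWithWebDriver();        // as in the earlier sketch
            }
            runProtocolScript(cookieHeader);
        }

        private static boolean hasValidSession() throws IOException {
            return Files.exists(COOKIE_STORE)
                    && Files.getLastModifiedTime(COOKIE_STORE).toInstant()
                            .isAfter(Instant.now().minus(SESSION_LIFETIME));
        }

        private static void runProtocolScript(String cookieHeader) throws IOException {
            // ... protocol-level requests using the cookie header ...
            Files.writeString(COOKIE_STORE, cookieHeader);   // save any updated session data
        }

        private static String signInWithWebDriver() {
            // ... headless Chrome login as shown in the earlier sketch ...
            return "session=placeholder";
        }
    }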

We are excited to have cracked the very difficult problem of realistic performance testing of an application that uses SSO and OAuth. If you are struggling with performance testing with SSO or OAuth, please contact us.
