INDUSTRY

Banking / Financial Services

PROJECT

The Transformation of Legacy Applications: implementing process, approach and tools in a new organisation where non-functional testing (NFT) had not previously been carried out.

SYNOPSIS

Testing Performance / Fimatix - Performance Testing Case Study

THE BACKGROUND

The customer was a relatively new entity, having split from one of the UK’s largest banks for compliance reasons. It was obliged to pay a licence fee to continue using the legacy applications and wanted to develop its own newer, better-performing applications that would support the business now and into the future.

THE CHALLENGES

Performance testing had not previously been carried out, but it was now seen as essential, not only to prove the concept of some transformation components but also to validate performance prior to release into production.

Several further challenges stood in the way:

•  Non-functional requirements did not exist, and although volumetric workload data existed it was not coherently organised. It did not readily map to the structure of the new application platform and was therefore largely unusable in its existing form.

•  System design documentation was available but out of date, as the design changed faster than the documentation could be maintained.

•  New technology, especially around the storage of large volumes of data, constituted a high performance risk and raised difficult questions about how it could be tested.

•  Tools and processes were not in place, and with tight delivery deadlines it would be difficult to establish them whilst moving forward with performance validation.

THE SOLUTION

A Performance Test Lead was deployed for five weeks in a Discovery Phase to define the scope, approach and deliverables for the Design Phase of Performance Testing. This included a review of the 12 key applications being delivered over the next 15 months, discussions and interviews with key project personnel, and an assessment of dates and timescales against the delivery model that would be used.

The activities centred on the following:

•  Understanding the architecture and design of each component, its interfaces, and how it communicated with upstream and downstream components.

•  Obtaining, analysing, organising and documenting workload volumetric information, with a focus on peak processing times (an illustrative analysis sketch follows this list).

•  Planning the build and execution phases, including monitoring back-end components where no user interface existed, and generating component-level performance test plans.

•  Evaluating performance testing tools, notably Green Hat, which was used to stub environments, and OATS for front-end and messaging-based performance testing (an illustrative stub sketch follows this list).

•  Creating large volumes of functionally accurate test data that could be used to drive performance tests and populate databases in test environments (an illustrative data-generation sketch follows this list).
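
As a minimal illustration of the volumetric analysis, the Python sketch below aggregates a hypothetical CSV extract of transaction logs (the file name and the timestamp/channel columns are assumptions, not the project’s actual data) to locate the peak processing hour per channel:

    # Sketch of peak-hour volumetric analysis (illustrative only).
    # Assumes a hypothetical extract 'transactions.csv' with
    # 'timestamp' (ISO 8601) and 'channel' columns.
    import csv
    from collections import Counter
    from datetime import datetime

    hourly = Counter()
    with open("transactions.csv", newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            hourly[(row["channel"], ts.hour)] += 1  # bucket by channel and hour

    # Report the busiest hour per channel as a candidate peak workload.
    peaks = {}
    for (channel, hour), count in hourly.items():
        if count > peaks.get(channel, (None, 0))[1]:
            peaks[channel] = (hour, count)

    for channel, (hour, count) in sorted(peaks.items()):
        print(f"{channel}: peak hour {hour:02d}:00, {count} transactions")

A per-channel, per-hour breakdown of this kind is what allows a workload model to be kept current: re-running the analysis on fresh extracts reveals developing trends as well as present peaks.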
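
Environment stubbing of the kind Green Hat provided can be pictured with the sketch below. This is not Green Hat’s API, only an illustration of the concept: a stand-in HTTP service that returns a canned response with an injected delay, so a component can be performance tested without its real downstream dependency (the payload and latency figure are assumed):

    # Illustrative service stub (not Green Hat; concept only).
    # Serves a canned JSON response with simulated downstream latency.
    import json
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANNED_RESPONSE = json.dumps({"status": "OK", "balance": "1000.00"}).encode()
    SIMULATED_LATENCY_SECONDS = 0.05  # assumed downstream response time

    class StubHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            time.sleep(SIMULATED_LATENCY_SECONDS)  # mimic the dependency's delay
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(CANNED_RESPONSE)))
            self.end_headers()
            self.wfile.write(CANNED_RESPONSE)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), StubHandler).serve_forever()

Fixing the stub’s latency at a known value also makes results easier to interpret: any variation measured at the component under test is attributable to that component rather than to its dependencies.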
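
The test data creation can likewise be sketched. The record layout below (account IDs, sort codes, balances) is hypothetical, chosen only to show the shape of a generator that produces structurally valid data at volume:

    # Illustrative bulk test-data generator (hypothetical record layout).
    # Writes synthetic but structurally valid account records to a CSV
    # that can seed a test database or drive a performance test.
    import csv
    import random

    random.seed(42)  # reproducible data sets across test runs

    def make_account(i):
        return {
            "account_id": f"ACC{i:08d}",
            "sort_code": "-".join(str(random.randint(10, 99)) for _ in range(3)),
            "balance": round(random.uniform(0, 50000), 2),
            "status": random.choice(["ACTIVE", "DORMANT", "CLOSED"]),
        }

    with open("accounts.csv", "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["account_id", "sort_code", "balance", "status"]
        )
        writer.writeheader()
        for i in range(1_000_000):  # scale to the volumes the tests demand
            writer.writerow(make_account(i))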

THE OUTCOME

Large volumes of data and information were analysed to form the basis of the performance testing requirements, approach and scope. The volumetric workload model was fully documented and easily maintainable, capturing current workload volumes and allowing developing trends to be observed. This analysis and planning underpinned the performance testing for the transformation project, allowing the customer to confidently predict expected performance in production.

THIS EXAMPLE DEMONSTRATES

•  Testing Performance’s ability to take raw data, architecture diagrams and system designs and build them into a coherent approach to performance testing.

•  The ability to plan and deliver performance testing at a component level, early in the development lifecycle and before component integration had occurred.

•  The ability to structure and plan performance testing with clearly defined milestones and deliverables.