345 worked with a major UK financial institution to help them with a complex, automated business process. Over a number of years the Client had grown their business very successfully, and the load on some of their computer systems had increased in line with customer numbers, to the point where they needed to invest further in Performance Engineering.
- Complex systems – each transaction involves calling up to 10 different systems
- Strict performance SLAs – response times and throughput need to be met to comply with industry standards
What We Did
Working with the Client’s technical team, a 345 performance expert followed our performance engineering principles:
Firstly, we worked to establish a performance benchmark. This is a test, or series of tests, that can be run against different system configurations. A performance benchmark must meet several criteria to be useful:
- Realistic – the test must simulate realistic load patterns in order to identify issues in the correct places.
- Repeatable – the test must get the same results when repeated against the same system and configuration.
- Stretching – the test must place enough load on the system to show some level of stress.
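The three criteria above can be checked mechanically. As an illustration only (the function and threshold names here are our own, not 345's tooling), a minimal sketch of a benchmark harness that runs a fixed workload several times and checks the repeatability criterion:

```python
import statistics
import time

def run_benchmark(transaction, requests=200, repeats=3):
    """Run a fixed workload several times and report throughput per run.

    `transaction` is a stand-in for one end-to-end business transaction;
    in a real benchmark it would call the system under test.
    """
    throughputs = []
    for _ in range(repeats):
        start = time.perf_counter()
        for _ in range(requests):
            transaction()
        elapsed = time.perf_counter() - start
        throughputs.append(requests / elapsed)
    return throughputs

def is_repeatable(throughputs, tolerance=0.10):
    """Repeatable: every run is within `tolerance` of the mean throughput."""
    mean = statistics.mean(throughputs)
    return all(abs(t - mean) / mean <= tolerance for t in throughputs)

# Dummy transaction standing in for a real system call.
results = run_benchmark(lambda: time.sleep(0.001))
print(results, is_repeatable(results))
```

The tolerance is an assumed figure; in practice the acceptable run-to-run variation depends on the system and the SLAs being tested against.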
The next step was to design a series of progressive overload tests. Progressive overload tests are another tool in the armoury, and they differ from performance benchmarks. The purpose of an overload test is to introduce gradual – but controlled – increases in load until the system breaks down. By doing this we can test the limits of the system and understand where the hottest of the performance hotspots are.
There need not be a single overload test: depending on the situation we can devise a series of tests modelling different scenarios. In this exercise we tested a single business process, gradually ramping up the rate of requests the system was asked to deal with.
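A ramp-up of this kind can be sketched in a few lines. This is a simplified model under our own assumptions (a single failure criterion of mean latency exceeding a budget; a real overload test would also watch error rates and timeouts), not 345's actual test rig:

```python
import time

def overload_test(transaction, start_rate=10, step=10, max_rate=200,
                  step_duration=1.0, max_latency=0.05):
    """Increase the request rate in controlled steps until the system degrades.

    Returns the highest rate (requests/sec) whose mean latency stayed
    within `max_latency`, or None if even the starting rate failed.
    """
    last_good = None
    rate = start_rate
    while rate <= max_rate:
        latencies = []
        interval = 1.0 / rate
        deadline = time.perf_counter() + step_duration
        while time.perf_counter() < deadline:
            t0 = time.perf_counter()
            transaction()
            latencies.append(time.perf_counter() - t0)
            # Pace requests to hold the target rate for this step.
            time.sleep(max(0.0, interval - (time.perf_counter() - t0)))
        if sum(latencies) / len(latencies) > max_latency:
            break  # the system has degraded past the budget: stop ramping
        last_good = rate
        rate += step
    return last_good
```

The step size and step duration control how gradual the ramp is; smaller steps locate the breaking point more precisely at the cost of longer test runs.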
Diagnostics and Data
The tests were designed to highlight the parts of the system where performance issues would be exposed. We followed our practices and looked for indicators in the following areas:
- The Four Horsemen
- Disk IO
- The Big Cs
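The indicator areas above are 345's own shorthand, so we will not guess at their contents here. Generically, though, gathering diagnostics means sampling a set of named metric probes alongside the test run. A minimal sketch, with placeholder probes rather than any real monitoring tooling:

```python
import time

def sample_metrics(probes, duration=1.0, interval=0.25):
    """Record named metric probes at a fixed interval during a test run.

    `probes` maps an indicator name (e.g. 'disk_io') to a zero-argument
    callable returning the current reading. The probes themselves are
    placeholders; real ones would query OS counters or APM tooling.
    """
    samples = []
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        row = {"t": time.monotonic()}
        for name, probe in probes.items():
            row[name] = probe()
        samples.append(row)
        time.sleep(interval)
    return samples

# Example with a dummy probe in place of a real disk IO counter.
samples = sample_metrics({"disk_io": lambda: 0.5}, duration=0.3, interval=0.1)
```

Timestamping each sample lets the diagnostics be lined up against the load ramp afterwards, which is what turns raw counters into evidence of where the hotspots are.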
Tuning and Retesting
Once we had the test data and had identified the parts of the system responsible, we were able to recommend changes to system configuration and tuning. We ran each change through the benchmark and overload tests to examine how it performed, noting improvements and differences in behaviour.
An important point here: you must adopt a scientific approach. Change one thing at a time, test, evaluate. Large software systems are complex, and it is possible to tune and change hundreds of settings; we were methodical so that we could isolate the benefit of each change.
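The one-change-at-a-time discipline can be expressed as a simple experiment loop. This is an illustrative sketch under our own naming (the configs and the scoring function are made up, not the Client's real settings):

```python
def evaluate_changes(baseline_config, candidate_changes, run_test):
    """Measure each candidate tuning change in isolation against a baseline.

    `run_test` takes a config dict and returns a throughput figure;
    in a real exercise it would run the full benchmark suite.
    Returns (change name, throughput delta vs baseline) for each candidate.
    """
    baseline = run_test(baseline_config)
    results = []
    for name, (key, value) in candidate_changes.items():
        config = dict(baseline_config)  # copy: only one setting differs
        config[key] = value
        results.append((name, run_test(config) - baseline))
    return results

# Dummy scoring function standing in for a real benchmark run.
run_test = lambda cfg: (60
                        + (20 if cfg.get("pool") == "large" else 0)
                        + (10 if cfg.get("gc") == "tuned" else 0))
results = evaluate_changes(
    {"pool": "small", "gc": "default"},
    {"bigger pool": ("pool", "large"), "gc tuning": ("gc", "tuned")},
    run_test,
)
print(results)  # → [('bigger pool', 20), ('gc tuning', 10)]
```

Because each candidate is applied to a fresh copy of the baseline, the measured delta is attributable to that change alone, which is exactly the isolation the methodology calls for.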
The final stage was to bring the learning together and write up the results so that they had maximum value for the Client:
- Technical changes – specify the exact technical changes that the Client’s technical teams needed to implement in future releases to achieve the performance gains discovered under test.
- Knowledge capture – write up the method and the results so that 345’s work contributed to the overall pool of knowledge within the Client’s technical team.
The Client was able to increase the throughput of their automated, complex business process from 60 transactions per second to 140 transactions per second, an increase of 133%. The recommended hardware changes are expected to achieve in excess of 200 transactions per second, meaning the Client has a roadmap to meet future business growth.