Big Data in financial services: from 5-minute transaction times to less than 1 second

A major household name in financial services approached us for help in building a Big Data platform. This is how we turned 5-minute search times into sub 1-second ones.

What was the problem?

The client’s existing platform was struggling under the weight of the transactions that needed to be searched. They needed to store the history of billions of payments, built up over a number of years, and this had caused significant slow-downs over the course of months and years.

Timeouts meant that Customer Services staff, when handling customer enquiries, were unclear about whether payments had completed. They often needed to contact technical support for manual handling of what ought to have been quick and simple self-service tasks.

Some searches were taking as long as 5 minutes each to complete. The situation was unsustainable, and the client got in touch to request that we help them build a system that could scale much better.

So, the client’s goal was to reduce search times significantly: eliminate timeouts, improve the experience for customers, and lessen the strain on the support team.

What did we do?

The client approached us with an existing system that was clearly no longer fit for purpose. Our assessment suggested that a Big Data solution would achieve their goal of significantly reducing transaction times.

Let’s explain the reasoning behind this.

Big Data means being able to efficiently and quickly handle, store and process large quantities of data. In banks, for example, the regulations state that you need to be able to hold 7 years’ worth of data on the payments you’ve sent and received. In large financial institutions, that’s a lot of data.

And gone are the days when banks handled only simple transactions, such as paying out cash to a person once a week; back then, the bank had no further involvement as the receiver went about their business and spent that cash.

Today’s world is one of mobile and contactless payments, with many small transactions making up a typical user’s payment profile. More transactions mean more information that must be recorded and processed, and that’s where the technical burden falls on financial institutions: creating and updating systems that can cope with the ever-growing volume of data.

To get a sense of the challenge ahead of us, check out these rough example numbers:

  • Average number of payments per day: 10 million (some days are busier than others)
  • Number of data points per payment: 10
  • Years to retain data: 7
  • Total data points: 255 billion


Our new system for solving the client’s problem would need to accommodate the following requirements:

  • Average data per payment: 10 KB
  • Peak messaging rate: 10,000 per second
  • Total data store size: > 200 TB (terabytes!)
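These figures follow from simple back-of-envelope arithmetic. A quick sanity check in Python, using the averages above (real volumes fluctuate day to day):

```python
# Back-of-envelope sizing from the figures above.
PAYMENTS_PER_DAY = 10_000_000      # average; some days are busier
DATA_POINTS_PER_PAYMENT = 10
KB_PER_PAYMENT = 10                # average data per payment
RETENTION_YEARS = 7

days = RETENTION_YEARS * 365
total_payments = PAYMENTS_PER_DAY * days
total_data_points = total_payments * DATA_POINTS_PER_PAYMENT
total_tb = total_payments * KB_PER_PAYMENT * 1024 / 1024**4

print(f"{total_data_points / 1e9:.1f} billion data points")  # 255.5 billion
print(f"{total_tb:.0f} TB of payment data")                  # 238 TB, i.e. > 200 TB
```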


We worked with the client’s technical teams to evaluate and choose the right technology. What we were looking for was a combination of messaging and storage.

The messaging solution needed to be able to handle high peak loads and to free up the source systems as quickly as possible. For this we proposed Apache Kafka.
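As a sketch of what that looks like in practice, a dedicated payments topic might be set up along these lines. The partition count, replication factor and retention value below are illustrative assumptions, not the client’s actual configuration:

```properties
# Illustrative Kafka topic settings (values are assumptions, not the client's setup).
# Created with, e.g.:
#   kafka-topics.sh --create --topic payments \
#     --partitions 50 --replication-factor 3 \
#     --config retention.ms=604800000
#
# partitions=50          spreads the 10,000 msg/s peak across brokers and consumers
# replication-factor=3   tolerates the loss of an individual broker
# retention.ms           keeps ~7 days in Kafka; long-term storage is the database
```

Kafka’s buffering is what “frees up” the source systems: producers hand a payment off in milliseconds and carry on, while consumers drain the topic into storage at their own pace.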

The storage needed to be scalable and resilient, able to continue operating across data centres and tolerate the loss of any individual computer.

The size and nature of the data indicated that a NoSQL solution would be most appropriate. We considered Cassandra for its write speed, but taking into account requirements such as partial updates and repeatable reads, we eventually settled on MongoDB.
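To illustrate the partial-update requirement: when a payment’s status changes, only the affected fields should be rewritten, not the whole document. In MongoDB this is the `$set` update operator; here is a minimal pure-Python sketch of its semantics (the field names are illustrative, not the client’s schema):

```python
def apply_set(document: dict, fields: dict) -> dict:
    """Mimic MongoDB's {"$set": ...} operator: overwrite only the
    named top-level fields, leaving the rest of the document intact."""
    updated = dict(document)
    updated.update(fields)
    return updated

payment = {"payment_id": "p-0001", "amount": 42.50, "status": "pending"}

# The rough pymongo equivalent would be:
#   collection.update_one({"payment_id": "p-0001"},
#                         {"$set": {"status": "settled"}})
updated = apply_set(payment, {"status": "settled"})

print(updated["status"])  # settled
print(updated["amount"])  # 42.5  (untouched)
```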

Big Data for financial services


The challenges we faced along the way were:

  • Interoperability of Windows and Linux, especially passing security credentials.
  • Running a hybrid solution – on-premises systems of record and the data platform in the cloud. And managing the infrastructure, security and latency issues around this.
  • Regulatory impact of putting sensitive data in the cloud, and having the appropriate controls in place.
  • Helping the client put in place the appropriate DevOps processes to deliver the infrastructure and application from code.


In the end, the actual data platform was the easiest part! Cloud technology makes the storage of large quantities of data highly practicable. The hardest part was integrating this with the legacy environment of a large organisation.


We needed to test the solution with representative data. We were glad to be able to use historical data that the client had already amassed in other systems. 

We worked with the client to extract the data from the existing repositories, reshape it into the right format for the application and feed the data into the messaging system.
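In spirit, the reshaping step was a mapping from each legacy record onto the document format the new platform expects, serialised and handed to the messaging layer. A simplified sketch, where the legacy layout and field names are invented for illustration:

```python
import json

def reshape(legacy_row: dict) -> str:
    """Map one legacy repository row onto the new payment document,
    serialised as JSON ready for the messaging system."""
    document = {
        "payment_id": legacy_row["TXN_ID"],
        "amount": float(legacy_row["AMT"]),
        "currency": legacy_row.get("CCY", "GBP"),
        "timestamp": legacy_row["TXN_TS"],
    }
    return json.dumps(document)

row = {"TXN_ID": "0001", "AMT": "19.99", "CCY": "GBP",
       "TXN_TS": "2016-05-04T10:00:00Z"}
print(reshape(row))
```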

By testing in this way we were able to load 7 years’ worth of data into the data platform in only 2 weeks. This gave us the confidence to sign off the design as fit for purpose and leave the client’s technical teams to implement the final rollout.
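That bulk load was itself a demanding test: pushing 7 years of history through in 2 weeks implies a sustained ingest rate roughly double the 10,000-per-second live peak quoted earlier. Quick arithmetic, using the average figures from above:

```python
PAYMENTS_PER_DAY = 10_000_000
RETENTION_YEARS = 7
LOAD_DAYS = 14

total_payments = PAYMENTS_PER_DAY * RETENTION_YEARS * 365
sustained_rate = total_payments / (LOAD_DAYS * 24 * 3600)

print(f"{sustained_rate:,.0f} payments/second sustained")  # 21,123
```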

What was the result?

We delivered a Big Data solution on the NoSQL technology MongoDB Atlas that was scalable to meet the client’s needs.

From an old system that frequently timed out, with searches that could take 5 MINUTES, we were able to deliver a system that handles the same searches in LESS THAN 1 SECOND.

Now, that’s fast! The bottlenecks and frustrations caused by slow searches were gone, and the client was delighted with the scalable solution delivered.


Do you need help with bringing your old systems up to date? We’re able to analyse your setup, maximise your operating efficiency and allow you to scale for the future.

Get in touch for a call

Let's see how we can help
