Introduction

 Driven by the financial services industry's soaring demand for improved data management solutions, the big data and analytics markets are enjoying an unprecedented boom. In response to a number of regulatory requirements and the ever-present need to stay ahead of the competition, firms are reserving a greater share of their IT budgets for big data technologies. As a result, the International Data Corporation (IDC) has predicted that this market will grow at 26.4% annually, reaching a value of $41.5 billion through 2018 [1]. To meet this demand, IT service providers are competing to design and implement solutions that revolutionise data management, a consideration many businesses have overlooked in the past.

What is driving the change in position?

 The Financial Crisis of 2008 exposed systemic data management issues within the banking and financial services sector. As a result, governments worldwide have enacted legislation aimed at enhancing the availability and quality of data. The Markets in Financial Instruments Directive II (MiFID II) is one such example. A cornerstone of the European Union's regulatory regime for financial services, the directive includes provisions that lay the foundation for improved data management.

Concurrently, the Dodd-Frank Wall Street Reform and Consumer Protection Act (DFA) and the European Market Infrastructure Regulation (EMIR) both introduce harmonisation rules that aim to improve the monitoring and quality of financial data by requiring unprecedented levels of detail for derivative transactions. It is clear that regulators have shifted their focus toward the transparency and auditability of data so as to better understand and monitor risk.

Transparency of Financial Data

Many of the requirements imposed on participants in the derivatives market since 2008 have been intended to better enable regulators to monitor systemic risk and market abuse. In the case of MiFID II, this takes the form of pre-trade and post-trade transparency. However, MiFID II presents market participants with some key challenges in terms of data management, including data aggregation, transaction reporting to detect market abuse, traceability and auditability of financial information, and trade transparency obligations.

 As the emphasis here is clearly placed upon data accuracy and the speed of data delivery, we have seen established data integration processes, such as ETL (extract, transform, load) and EII (enterprise information integration), used as scalable approaches to acquiring and moving large volumes of data from one database to another. While these ensure a high level of traceability, they offer limited efficiency for real-time or on-demand data access.
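
To make that trade-off concrete, the following is a minimal sketch of an ETL-style batch pipeline in Python. The record layout and helper names are illustrative assumptions rather than any product's API: rows are extracted from a source, transformed into a standard schema, and loaded into a target store while an audit log records each step.

```python
# Minimal ETL sketch: extract raw trade records, transform them into a
# standard schema, and load them while recording an audit trail.
# All record layouts and names here are illustrative assumptions.
from datetime import datetime, timezone

def extract(source_rows):
    """Pull raw rows from the source system (here, an in-memory list)."""
    return list(source_rows)

def transform(raw_rows):
    """Normalise field names and types into the warehouse schema."""
    return [{
        "trade_id": str(row["id"]),
        "notional": float(row["amount"]),
        "currency": row["ccy"].upper(),
        "loaded_at": datetime.now(timezone.utc).isoformat(),
    } for row in raw_rows]

def load(rows, warehouse, audit_log):
    """Write rows to the target store, logging each load for auditability."""
    for row in rows:
        warehouse[row["trade_id"]] = row
        audit_log.append(("LOAD", row["trade_id"], row["loaded_at"]))

warehouse, audit_log = {}, []
source = [{"id": 42, "amount": "1000000.00", "ccy": "eur"}]
load(transform(extract(source)), warehouse, audit_log)
print(audit_log)  # each record's journey through the pipeline is traceable
```

The batch shape of such a pipeline is precisely what limits real-time access: transformed data becomes visible downstream only once a scheduled load completes.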

A solution-based approach

Although transparency and performance management processes have improved over the years, inadequate data management tools have left decision makers exposed to slow, fragmented processes, imperfect data quality and incomplete audit trails. Technology efforts must focus on warehousing data in a manner that is easy to access and audit. But which is the correct approach for market participants?

  1. Hybrid solutions: platforms that both handle and process data, combining data warehouse and ETL technologies with the aim of overcoming the limitations of each. The advantage of these platforms is that users can access data easily, as it is forced into a standard structure: the underlying data is restructured before migrating into the data warehouse, maintaining a complete audit trail. Transparency is supported by a SQL or XQuery layer over the data sources, which allows information to be tracked in real time, reducing the opportunity for losses and securing first-mover advantages (see the first sketch after this list).
  2. Re-modelling of data models: analytics solutions developed to allow market participants to make better and less risky decisions. Traditional relational database management systems (RDBMS) are not capable of handling the volume of data now required. As a consequence, there is a move toward specialised databases customised for a specific group of consumers. SQL, long the standard means of querying data out of relational systems, is being supplemented by programming models that compute more efficiently over distributed data. Using an optimised processing engine to work on large datasets hosted on clusters, these systems offer better scalability and data handling than traditional databases, and provide a useful set of tools that developers can use to increase functionality and traceability (see the second sketch after this list).
  3. Cloud storage: a service model that allows the user to maintain and manage data stored in logical pools. Although these "clouds" provide flexibility, accessibility and cost reductions, they have also been linked to poor data security. To verify the integrity of dynamic data stored in the cloud, a cloud server must demonstrate to an independent inspector, the third-party auditor (TPA), that it is not amending the stored data, thereby proving the data's integrity and security (see the third sketch after this list).
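
Below is a minimal sketch of the hybrid approach, using Python's built-in sqlite3 module to stand in for the warehouse. The schema and the load_trade helper are illustrative assumptions rather than any vendor's API.

```python
# Hybrid-platform sketch: records are forced into a standard structure on
# the way into the warehouse (SQLite here), every load is written to an
# audit table, and plain SQL tracks the data together with its lineage.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE trades (trade_id TEXT PRIMARY KEY,
                         notional REAL, currency TEXT);
    CREATE TABLE audit  (trade_id TEXT, action TEXT,
                         logged_at TEXT DEFAULT CURRENT_TIMESTAMP);
""")

def load_trade(trade_id, notional, currency):
    """Restructure the record into the standard schema and audit the load."""
    con.execute("INSERT INTO trades VALUES (?, ?, ?)",
                (trade_id, float(notional), currency.upper()))
    con.execute("INSERT INTO audit (trade_id, action) VALUES (?, 'LOAD')",
                (trade_id,))

load_trade("T-42", "1000000.00", "eur")

# One SQL query returns the data joined with its complete audit trail.
for row in con.execute("""SELECT t.trade_id, t.notional, a.action, a.logged_at
                          FROM trades t JOIN audit a USING (trade_id)"""):
    print(row)
```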
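
The second sketch illustrates, under the same caveat, the distributed programming model described in the second option: per-partition computation followed by a merge of partial results, rather than a single query against one relational server.

```python
# Distributed-model sketch (in the spirit of MapReduce/Spark): each data
# partition is aggregated locally on its node, and the partial results
# are then merged. Partitions and figures are invented for illustration.
from collections import Counter
from functools import reduce

partitions = [  # each inner list stands in for data held on one node
    [("EUR", 1_000_000.0), ("USD", 250_000.0)],
    [("EUR", 500_000.0), ("GBP", 750_000.0)],
]

def map_partition(rows):
    """Runs locally on each node: total notional per currency."""
    totals = Counter()
    for currency, notional in rows:
        totals[currency] += notional
    return totals

def merge(left, right):
    """Combines two nodes' partial results (Counter.update adds values)."""
    left.update(right)
    return left

partials = [map_partition(p) for p in partitions]  # parallel in practice
print(reduce(merge, partials))  # Counter({'EUR': 1500000.0, ...})
```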
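
Finally, a deliberately simplified sketch of third-party integrity auditing. Production schemes (for example, provable data possession protocols) use cryptographic tags so that the auditor never needs the full data; this version shows only the protocol shape.

```python
# Simplified third-party auditing sketch: the owner precomputes one-time
# challenges before upload; the TPA later spends one per audit, and the
# server can only answer correctly if it still holds the unmodified data.
import hashlib, os

def tagged_digest(nonce: bytes, block: bytes) -> str:
    """SHA-256 over nonce||block; answering requires the full block."""
    return hashlib.sha256(nonce + block).hexdigest()

# Data owner, before upload: fingerprint the block under random nonces.
block = b"derivatives trade records, 2014-Q3"
challenges = [os.urandom(16) for _ in range(3)]
expected = {n: tagged_digest(n, block) for n in challenges}
# The owner hands `block` to the cloud and `expected` to the TPA.
cloud = {"block-1": block}

# TPA audit: spend one unused challenge; a stale cached answer cannot pass.
nonce = challenges.pop()
response = tagged_digest(nonce, cloud["block-1"])  # computed server-side
assert response == expected[nonce], "integrity check failed"
print("audit passed: stored data is unmodified")
```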

Why Change?

Evolving regulation is pushing for more transparent management of financial data, and, in order to meet these challenges, firms are investing heavily in new and more efficient data technologies. Participants still face challenges around data security, real-time data access and incomplete audit trails, but the manner in which an organisation provides traceability and auditability of financial information strongly affects its performance, creating opportunities to boost client confidence and gain a competitive edge. Market participants embracing change must decide which solution matches their individual requirements and budget, and brings them a sufficient level of comfort to operate in an advancing technological world.

[1] https://www.idc.com/prodserv/4Pillars/bigdata