Is Your Firm Ready for Blockchain-Based Trade Processing?

Posted on December 05, 2016

Post-Trade Processing Today

Post-trade processing is a major center of cost and effort in the financial services industry. It is a complex, multi-step undertaking whose tasks include:

  • Comparing each firm’s ledger of trades to determine what trades they have done with the other firm
  • Handling exceptions when the ledgers conflict
  • Providing definitions for non-standard instruments
  • Formalizing the change of ownership of the instruments
  • Processing payments
  • Providing valuations for some instruments
  • Dealing with effects of corporate actions (stock splits, etc.)
  • Correcting errors in the details of the trade
  • Verifying that the trade complies with legal and internal compliance rules
  • Determining the value of the instruments traded
  • Verifying margins and collateral are correctly handled

Within a single firm, it is common for multiple business units and independent systems to be involved in processing a single trade. It is often necessary to interact with multiple external firms beyond the counterparty. Errors are common given the complexity and speed of the modern trading environment, and the post-trade processing environment is built to be flexible and forgiving toward them.

Straight-through processing (STP) is meant to allow trades to be processed as quickly as possible after trade execution. STP has only been possible to achieve in limited areas because of the complexity of post-trade processing.

New Kid on the Block: Blockchain

Today, STP is back in the news because of a new technology called “blockchain.”

Blockchain replaces the individual ledgers maintained by each firm with a common, distributed, legally binding ledger. With blockchain, reconciliation of ledgers and its associated tasks, such as exception handling, disappear because a single logical ledger means there is nothing to reconcile.
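To make concrete what that reconciliation work looks like today, here is a minimal sketch in Python (the trade fields and the matching key are illustrative assumptions, not any firm's actual format). It compares two firms' ledgers and reports the breaks that would normally feed an exception-handling workflow.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Trade:
    trade_id: str      # shared identifier agreed at execution (assumed)
    instrument: str
    quantity: int
    price: float

def reconcile(our_ledger, their_ledger):
    """Compare two ledgers keyed on trade_id and return the breaks."""
    ours = {t.trade_id: t for t in our_ledger}
    theirs = {t.trade_id: t for t in their_ledger}

    missing_from_them = [ours[k] for k in ours.keys() - theirs.keys()]
    missing_from_us = [theirs[k] for k in theirs.keys() - ours.keys()]
    mismatched = [
        (ours[k], theirs[k])
        for k in ours.keys() & theirs.keys()
        if ours[k] != theirs[k]          # any field difference becomes an exception
    ]
    return missing_from_them, missing_from_us, mismatched

# Example: one mismatched price becomes an exception to investigate.
ours = [Trade("T1", "XYZ", 100, 10.00), Trade("T2", "ABC", 50, 20.00)]
theirs = [Trade("T1", "XYZ", 100, 10.00), Trade("T2", "ABC", 50, 20.05)]
print(reconcile(ours, theirs))
```

With a single shared ledger there are no two copies to diff, so this entire class of work, and the exception queues behind it, goes away.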

A common distributed ledger is a major enhancement that makes STP more achievable.

Advocates of this technology argue that blockchain simplifies and speeds up regulatory reporting because the data needed for the reports is stored in one place. They also argue that financial crime will be reduced because ledger transactions cannot be modified, that clearing and settlement will become near-instantaneous, and that data accuracy will increase, all while cost and risk go down.
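The claim that ledger transactions cannot be modified rests on how a blockchain links entries together. The sketch below is a simplified illustration, not any particular blockchain protocol: each block carries a hash of the previous block, so altering an old entry changes its hash and invalidates every block after it.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash of the block's canonical JSON representation."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transaction: dict) -> None:
    """Append a new block linked to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "tx": transaction})

def verify(chain: list) -> bool:
    """The chain is valid only if every link still matches."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
append_block(chain, {"trade_id": "T1", "qty": 100, "price": 10.00})
append_block(chain, {"trade_id": "T2", "qty": 50, "price": 20.00})
print(verify(chain))                      # True
chain[0]["tx"]["price"] = 99.99           # tamper with a settled trade
print(verify(chain))                      # False: the alteration is detectable
```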

While all of these benefits may be achievable, the question remains: how do you get from here to there?

Trade-Processing With Blockchain

While the potential benefits of blockchain are real, the reasons for the existence of current trade-processing infrastructures are also real. It is certainly true that blockchain has the potential to reduce the work involved with settlement, clearing, avoidance/correction of errors and compliance. However, not all the work will go away. The post-trade work that remains will have to move to real-time.

Today’s trade-processing systems require:

  • Batching the data
  • Multiple, independent systems, each working with its own copy of the data
  • Extensive ETL
  • Extensive manual intervention
  • Error correction

However, a trade-processing system built for blockchain:

  • Handles data in real-time
  • Accesses a common, distributed master copy of the data
  • Uses minimal, real-time ETL
  • Is fully automated
  • Prevents/avoids errors

Comparing these lists, it is clear that many of today’s trade-processing systems are not ready for blockchain. Most importantly, the fundamental (serial batch) approach of many existing systems is not suitable for the new world.

Blockchain will not be adopted universally, all at once. Instruments and locations will adopt it at different rates and to different levels of completeness. Firms will need to support both blockchain-based and non-blockchain-based trade processing for the indefinite future.

A New Trade-Processing Infrastructure

At a technical level, meeting the requirements listed above calls for a new approach. A handful of fundamental building blocks, combined with a sufficiently flexible infrastructure, make it possible to build the new trade-processing infrastructure. These building blocks are:

  • Central data repository
  • ACID transactions
  • Ability to handle all data types used in trade-processing
  • Support for different users accessing the same data through different views

These core capabilities offer a variety of enhancements to trade-processing:

Real-Time – Having multiple processes, each handling part of the trade-processing in serial fashion with its own copy of the data, makes it difficult to implement STP. Having each data element exist only once, with all processes accessing that central copy, is a much better approach. With a batch processing framework, data integrity is easy to maintain because only one process accesses a given data element at any one time. With a real-time system, data integrity requires transactions and ACID guarantees.
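As an illustration of why transactions matter once every process works against the same copy of the data, the following sketch uses Python's built-in sqlite3 module as a stand-in for whatever transactional store underlies the shared ledger (the tables and amounts are illustrative). The two legs of a settlement either both commit or both roll back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions (account TEXT PRIMARY KEY, qty INTEGER)")
conn.execute("CREATE TABLE cash (account TEXT PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO positions VALUES ('buyer', 0), ('seller', 100)")
conn.execute("INSERT INTO cash VALUES ('buyer', 1000.0), ('seller', 0.0)")
conn.commit()

def settle(conn, qty, price):
    """Move securities and cash atomically: both legs commit or neither does."""
    try:
        with conn:  # the connection context manager wraps one transaction
            conn.execute("UPDATE positions SET qty = qty - ? WHERE account = 'seller'", (qty,))
            conn.execute("UPDATE positions SET qty = qty + ? WHERE account = 'buyer'", (qty,))
            conn.execute("UPDATE cash SET balance = balance - ? WHERE account = 'buyer'", (qty * price,))
            conn.execute("UPDATE cash SET balance = balance + ? WHERE account = 'seller'", (qty * price,))
    except sqlite3.Error:
        pass  # rollback already happened; the ledger stays consistent

settle(conn, 10, 10.0)
print(conn.execute("SELECT * FROM positions").fetchall())
print(conn.execute("SELECT * FROM cash").fetchall())
```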

Handling Complexity – Trade-processing covers instruments that are easy to model, like equities, as well as instruments with complex underlying object models, like certain swaps. Complex instruments can be difficult to describe, especially in a relational schema. See “Object-Oriented Programming & NoSQL Databases” for a discussion of this.

Trade-processing also needs to verify that trades are consistent with the appropriate master agreements – information that exists in the form of legal documents rather than relational tables. To fully implement real-time trade-processing, the infrastructure must be able to automate the handling of this kind of data.
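To illustrate the modeling point, here is a sketch of how an interest-rate swap might be represented as a single nested document (the fields are illustrative assumptions, not a standard schema). The same trade expressed relationally would typically be shredded across several linked tables.

```python
# A swap as one self-describing document: legs, schedules, and the link to the
# governing master agreement all live together and can vary by instrument.
swap = {
    "trade_id": "SWAP-2016-0042",
    "type": "interest_rate_swap",
    "counterparty": "Bank B",
    "master_agreement": "ISDA-2002-BankB",   # reference to a legal document
    "legs": [
        {"direction": "pay", "rate": {"kind": "fixed", "value": 0.0215},
         "notional": 10_000_000, "currency": "USD",
         "schedule": ["2017-06-05", "2017-12-05"]},
        {"direction": "receive", "rate": {"kind": "floating", "index": "LIBOR-3M", "spread": 0.001},
         "notional": 10_000_000, "currency": "USD",
         "schedule": ["2017-03-05", "2017-06-05", "2017-09-05", "2017-12-05"]},
    ],
}

# In a relational design the same trade typically fans out across tables such as
# trades, legs, rate_terms, and payment_schedules, each needing ETL both ways.
print(len(swap["legs"]), "legs,", sum(len(l["schedule"]) for l in swap["legs"]), "payment dates")
```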

Automation – In today’s trade-processing systems, critical functionality exists in products like Microsoft Excel. While blockchain might eventually sweep all this aside, the reason for Excel’s continued use is that it is sometimes easier to build and maintain complex and changing functionality in a spreadsheet than formally coding it into an application. When moving to blockchain, the probability of success is much greater if the new infrastructure can leverage as much of the existing infrastructure as possible and not require a complete rewrite of the entire process.

ETL – A key to moving to a blockchain-based trading system is moving away from extensive batch-oriented processing to real-time ETL. This is difficult to do in relational systems because a large amount of the ETL consists of translating data from its original format into the tables, rows, and columns that a relational database requires. This takes substantial development time to model and can be time-consuming at run time.

Another reason for the multiple, independent ETL processes of today is that a given ETL process may require data to exist in a certain format. The process may need to transform the data it receives into its desired form, resulting in multiple copies of the same underlying data. To move to real-time ETL, the database underlying trade-processing must be able to provide different views of the same data to different users.
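A minimal way to picture different views of the same data without copying it: the sketch below, in plain Python with hypothetical field names, keeps one master trade record and derives each consumer's view on demand instead of running a separate ETL job that materializes another copy.

```python
trade = {
    "trade_id": "T1",
    "instrument": "XYZ",
    "quantity": 100,
    "price": 10.0,
    "counterparty": "Bank B",
    "trader": "desk-7",
}

def settlement_view(t):
    """What the settlement system needs: instrument, quantity, counterparty."""
    return {k: t[k] for k in ("trade_id", "instrument", "quantity", "counterparty")}

def risk_view(t):
    """What the risk system needs: exposure by instrument."""
    return {"trade_id": t["trade_id"], "instrument": t["instrument"],
            "exposure": t["quantity"] * t["price"]}

# Both consumers read the same master record; neither keeps its own copy.
print(settlement_view(trade))
print(risk_view(trade))
```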

Error Avoidance/Correction – When a blockchain-based transaction is executed, it is legally binding; the default approach for blockchain is that you cannot change a transaction once it is completed. This is a tough new constraint. Any trade-processing infrastructure built for a blockchain-based environment must have rigorous error detection and correction capabilities.
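Because a completed blockchain transaction cannot simply be amended later, errors have to be caught before a trade is committed. The following sketch (the field names and rules are illustrative assumptions) shows the shape of a pre-commit validation step that rejects a bad trade rather than letting it settle.

```python
REQUIRED_FIELDS = ("trade_id", "instrument", "quantity", "price", "counterparty")
APPROVED_COUNTERPARTIES = {"Bank B", "Fund C"}   # illustrative compliance rule

def validate(trade: dict) -> list:
    """Return a list of problems; an empty list means the trade may be committed."""
    errors = []
    for field in REQUIRED_FIELDS:
        if field not in trade:
            errors.append(f"missing field: {field}")
    if trade.get("quantity", 0) <= 0:
        errors.append("quantity must be positive")
    if trade.get("price", 0) <= 0:
        errors.append("price must be positive")
    if trade.get("counterparty") not in APPROVED_COUNTERPARTIES:
        errors.append("counterparty not covered by an approved master agreement")
    return errors

bad_trade = {"trade_id": "T9", "instrument": "XYZ", "quantity": -5, "price": 10.0}
problems = validate(bad_trade)
if problems:
    print("rejected before commit:", problems)   # nothing reaches the shared ledger
```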

MarkLogic for Real-Time Trade-Processing

SQL?

Many trade-processing systems have traditionally been built on a relational infrastructure. In a blockchain era, this is problematic for two main reasons:

  1. The need to use time-consuming ETL to convert complex objects into tables, rows, and columns (and vice versa) is more suitable for batch than real-time processing.
  2. Relational databases have issues handling critical unstructured or semi-structured data.

The reality is that trade-processing is complex, and relational is not a good fit for it. When real-time trade-processing is required, the disadvantages become overwhelming.

If not SQL, then what?

Consider MarkLogic – MarkLogic provides:

  1. Multi-model Database – Documents, semantic triples, geospatial, SQL. All stored together. Accessible in a tightly integrated fashion. For example, when operationalizing the data in master agreements is needed – and it is – technologies like semantic triples, coupled with other database capabilities, become key.
  2. Minimized ETL – Avoid shredding data to force it to fit into relational tables.
  3. Support for Change – MarkLogic loads data as is, and the data is then enhanced and harmonized as needed. Data with varying schemas can be loaded without up-front modeling and easily harmonized later. See “MarkLogic As a SQL Replacement” for a discussion of this.
  4. Alerting – MarkLogic’s alerting makes it possible to monitor and correct potential issues at data ingestion, critical functionality for real-time trade-processing. See “Data Quality and NoSQL Databases” for more.
  5. Different Views of the Same Data – MarkLogic allows data to be accessed as documents or as SQL. Multiple SQL views can be defined against the same underlying data set (a conceptual sketch follows this list).
  6. Enterprise Security – MarkLogic has the best security of any NoSQL database.
  7. Data Integrity – MarkLogic is one of the few NoSQL databases that provide ACID transactions – the backbone of data integrity.
  8. Reliability – Out-of-the-box automatic failover, replication, and backup/recovery.
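As a conceptual illustration of points 2 and 5 above, here is a sketch in plain Python with sqlite3 standing in for the database's own view mechanism (this is not the MarkLogic API, and the document fields are illustrative). Documents are kept as loaded, and a row-shaped view is projected out of them for SQL-style queries.

```python
import json
import sqlite3

# Documents as they were loaded; the schema may vary from one to the next.
docs = [
    {"trade_id": "T1", "instrument": "XYZ", "quantity": 100, "price": 10.0},
    {"trade_id": "SWAP-2016-0042", "type": "interest_rate_swap",
     "legs": [{"direction": "pay", "notional": 10_000_000}]},
]

# Project a row-shaped view out of whichever documents carry the needed fields,
# then query it with ordinary SQL. The documents themselves are not reshaped.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE equity_trades (trade_id TEXT, instrument TEXT, quantity INT, price REAL)")
for d in docs:
    if {"instrument", "quantity", "price"} <= d.keys():
        conn.execute("INSERT INTO equity_trades VALUES (?, ?, ?, ?)",
                     (d["trade_id"], d["instrument"], d["quantity"], d["price"]))

print(conn.execute("SELECT instrument, SUM(quantity * price) FROM equity_trades GROUP BY instrument").fetchall())
print(json.dumps(docs[1]))   # the swap remains available in full as a document
```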

The world of trade-processing is changing. In the future, it will move faster and demand higher levels of accuracy than ever before.

The trade-processing systems of today are not ready for the demands of tomorrow. Moving to this new world will require fundamental enhancements in the underlying trade-processing frameworks, starting with the ability to handle diverse, constantly evolving data sets.

David Kaaret

David Kaaret has worked with major investment banks, mutual funds, and online brokerages for over 15 years in technical and sales roles.

He has helped clients design and build high performance and cutting edge database systems and provided guidance on issues including performance, optimal schema design, security, failover, messaging, and master data management.
