[Note: Amir Halfon is a frequent blogger at FinExtra, a site devoted to the financial technology community. Following is an excerpt from his blog post of January 4.]
In an earlier post, I wrote about NoSQL: what it is, what it isn't, and some of the misconceptions surrounding it. Now I will write about what financial institutions can gain from it. First up: the Operational Trade Store.
Once a trade is made, it needs to be processed by the back office and reported to the regulators. Trade data is typically read off the message bus connecting the trading systems and persisted into a relational database, which becomes the system of record for post-trade processing and compliance. The original data formats are either XML-based (FpML, FIXML) or text-based (FIX), and they have to be transformed into a normalized relational representation. This may sound easy enough, but with the front office innovating rapidly and introducing very complex instruments quite often, the task of fitting them into a relational store becomes harder and harder. As a result, the back office takes longer to respond to the needs of the business. This is compounded by the need to create and maintain a fully normalized, "canonical" schema before any new data can be ingested, which can become quite onerous and leads to a proliferation of schemas and databases. Worse yet, workarounds are put in place to shove data into existing schemas, such as flags indicating that a record is of a different type than expected, or an empty shell into which any variable can be fitted.
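To make the workaround pattern concrete, here is a minimal, hypothetical sketch (the table and column names are illustrative, not from any real trade store): a fixed relational schema gets reused for a new instrument type via a type flag and generic catch-all columns, exactly the kind of overloading described above.

```python
# Hypothetical illustration of the "workaround" anti-pattern: a type flag
# plus "empty shell" columns whose meaning depends on the flag's value.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE trades (
        trade_id   TEXT PRIMARY KEY,
        trade_type TEXT,   -- flag indicating the record's real type
        notional   REAL,
        attr1      TEXT,   -- catch-all columns: their meaning changes
        attr2      TEXT    -- per trade_type by undocumented convention
    )
""")
# A vanilla interest-rate swap fits the schema as designed...
conn.execute("INSERT INTO trades VALUES ('T1', 'IRS', 1e6, 'USD', NULL)")
# ...but a newer FX option is shoehorned in: attr2 now secretly holds a
# strike, and every downstream consumer must know that convention.
conn.execute("INSERT INTO trades VALUES ('T2', 'FXO', 5e5, 'EUR', '1.25')")

for row in conn.execute("SELECT * FROM trades ORDER BY trade_id"):
    print(row)
```

Any consumer that misses the `trade_type` convention misreads `attr2`, which is precisely how the downstream trade exceptions discussed next arise.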
These workarounds can create costly trade exceptions downstream, which have to be resolved manually. The ensuing costs are compounded by the high maintenance costs of complex RDBMS deployments, driving up the cost per trade.
All of these ills can be addressed with Enterprise NoSQL by persisting trade messages as-is, without transforming them into a normalized relational schema. Trade messages carry their own structure, and no overarching canonical data model is needed to process or report on them. Furthermore, that structure can be imposed at query time, based on actual usage, rather than by trying to design a schema up front that handles every foreseeable usage. This is an example of the notion of schema-on-read mentioned in earlier posts.
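The schema-on-read idea can be sketched in a few lines. This is a toy illustration, not any vendor's API: a plain list stands in for the document store, the XML snippets are simplified stand-ins for real FpML/FIXML messages, and `query_notionals` is a hypothetical helper that imposes structure only when the data is read.

```python
# Minimal schema-on-read sketch: trade messages are persisted verbatim,
# and structure is imposed only at query time.
import xml.etree.ElementTree as ET

# Two trades with different shapes -- no shared relational schema required
# before either one can be ingested.
trade_store = [
    '<trade id="T1"><instrument>IRS</instrument>'
    '<notional ccy="USD">1000000</notional></trade>',
    '<trade id="T2"><instrument>FXOption</instrument><strike>1.25</strike>'
    '<notional ccy="EUR">500000</notional></trade>',
]

def query_notionals(store, ccy):
    """Impose structure at read time: extract notionals for one currency."""
    results = []
    for doc in store:
        root = ET.fromstring(doc)      # parse the stored message as-is
        notional = root.find("notional")
        if notional is not None and notional.get("ccy") == ccy:
            results.append((root.get("id"), float(notional.text)))
    return results

print(query_notionals(trade_store, "USD"))   # [('T1', 1000000.0)]
```

When reporting needs change, only the query changes; the stored messages are never migrated to a new schema.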
Next up: Reference Data
For more information
MiFID II – Solving the Data Challenge – white paper. Learn about deploying a trade store as a regulatory reporting solution architecture.