An article published in Big Data Zone by Vamsi Chemitiganti caught my eye today. In it, he catalogs and describes the various applications Cap Markets firms have for Big Data tools like Spark/Storm and Hadoop, including:
- Client Profitability Analysis/Customer 360 view
- Dodd Frank/Volcker Rule Reporting
- CCAR & DFast Reporting
- Risk Management
- Liquidity Management
- Intraday Credit Risk Management
- Intraday Market Risk Management
- Reducing Market Data Costs
- Trade Strategy Development & Backtesting
- Sentiment-based Trading
- Market & Trade Surveillance
This certainly jibes with our experience; we're seeing widespread deployment of a new generation of Big Data technology across our customer sites. Along with Hadoop and Spark, we're seeing especially strong interest in Apache Kafka to support these functions. As the author points out, FINRA is processing 40-75 billion (with a "b") market events every day. Given numbers of that scale, combined with the need to comply with ever more granular transaction-based regulations like MiFID II, it's essential that trading firms equip themselves properly.
Banks and other trading firms are also beginning to fully understand the value of data visualization in a Big Data context. It's simply impossible for traders and compliance officers to make sense of all the data available to them without it. In particular, we're seeing increasing use of treemap visualizations to examine activity at a single moment in time, combined with a variety of visual analytics tools designed to facilitate exploration of time series data, down to the nanosecond level.
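To make the treemap idea concrete, here is a minimal sketch of the core layout step: splitting a rectangle into tiles whose areas are proportional to each item's value (here, made-up per-symbol activity for one snapshot in time). The symbols, numbers, and the `slice_layout` helper are illustrative assumptions, not any vendor's actual API; production treemap libraries use more sophisticated "squarified" layouts to keep tiles closer to square.

```python
from typing import List, Tuple

# label, x, y, width, height of one tile
Rect = Tuple[str, float, float, float, float]

def slice_layout(items: List[Tuple[str, float]],
                 x: float, y: float, w: float, h: float,
                 vertical: bool = False) -> List[Rect]:
    """Split the rectangle (x, y, w, h) into side-by-side tiles whose
    areas are proportional to each item's value (one treemap level)."""
    total = float(sum(v for _, v in items))
    tiles: List[Rect] = []
    offset = 0.0
    for label, value in items:
        frac = value / total
        if vertical:
            tiles.append((label, x, y + offset * h, w, frac * h))
        else:
            tiles.append((label, x + offset * w, y, frac * w, h))
        offset += frac
    return tiles

# Hypothetical snapshot: notional traded per symbol at one instant
snapshot = [("AAPL", 40.0), ("MSFT", 30.0), ("GOOG", 20.0), ("TSLA", 10.0)]
for label, tx, ty, tw, th in slice_layout(snapshot, 0, 0, 800, 400):
    print(f"{label}: origin=({tx:.0f},{ty:.0f}) size={tw:.0f}x{th:.0f}")
```

Nesting this call, alternating the split direction at each level of a hierarchy (e.g. desk, then trader, then symbol), yields the familiar treemap picture of where activity concentrates at a glance.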
After all, what good is all this data without the ability to understand it?
Read the full text of the story here: https://dzone.com/articles/capital-markets-pivots-to-big-data-in-2016