Intraday Risk and Portfolio Selection

Co-authored with Steven Krawciw

You feel it, you know it: some stocks tend to have more intraday volatility than others. Some stocks are more prone to flash crashes than others. Some stocks attract higher participation from aggressive high-frequency trading (HFT) than others. At this point in financial innovation, no savvy portfolio manager can afford to ignore intraday risk; instead, it needs to become an integral part of the portfolio selection model.

Why do intraday dynamics need to enter portfolio selection models? Can't portfolio managers simply ride out the intraday ups and downs in their pursuit of longer-term goals? The answer is yes, but at a considerable cost. As the latest AbleMarkets.com research indicates, aggressive HFT participation and flash crash risk are "sticky": they change slowly from one month to the next, let alone from one day to another. Understanding which securities are prone to flash crashes can help avoid triggering unnecessary stop-loss orders. Avoiding financial instruments with high aggressive HFT participation can save double-digit percentage costs in execution.
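To make the "stickiness" claim concrete, here is a minimal Python sketch that measures the month-over-month persistence of a per-stock metric. The input table, its tickers and its column names are hypothetical placeholders, not the AbleMarkets data format:

```python
import pandas as pd

# Hypothetical monthly panel: one row per (ticker, month) with the share of
# volume attributed to aggressive HFT. All values are illustrative.
data = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC"] * 2,
    "month":  ["2014-01"] * 3 + ["2014-02"] * 3,
    "hft_participation": [0.22, 0.35, 0.10, 0.21, 0.37, 0.11],
})

# Pivot to a tickers-by-months table and compute the rank correlation of
# participation between consecutive months; values near 1 mean the metric
# is "sticky" -- the same stocks keep attracting aggressive HFTs.
panel = data.pivot(index="ticker", columns="month", values="hft_participation")
stickiness = panel["2014-01"].corr(panel["2014-02"], method="spearman")
print(f"Month-over-month rank correlation: {stickiness:.2f}")
```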

Why do metrics like aggressive HFT participation and flash crash probabilities persist in given stocks? The answer can be found in modern market microstructure. Microstructure phenomena such as aggressive HFT participation are directly linked to the automation of financial markets. As trading becomes increasingly electronic, many financial market participants build proprietary computer programs to gain an edge in the markets. These programs are time-consuming and costly to build, and successive iterations take months and even years. As a result, intraday dynamics remain stable over long time horizons and may differ significantly from one security to the next.

Why would market participants choose to build and run programs for some financial instruments, but not others? The answer has three parts:

1) Cost of historical data
2) Cost of interpreting historical data
3) Processing power restrictions

Building a profitable trading algorithm requires a considerable investment in highly granular, and therefore voluminous, data. Data can be very expensive to buy and to store. Major data companies sell historical data for tens to hundreds of thousands of dollars. Just one day of highly granular data containing individual orders from a major exchange takes up 5 GB of space, enough storage to hold more than 1,000 high-resolution digital photos. Given the data costs, trading developers may choose to acquire data for only a selected set of securities, paving the way for persistent discrepancies in microstructure among financial instruments.
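As a rough illustration of how quickly this adds up, the back-of-envelope calculation below uses the 5 GB/day figure cited above; the number of exchanges and trading days are illustrative assumptions:

```python
# Back-of-envelope: how fast granular order data accumulates.
GB_PER_DAY_PER_EXCHANGE = 5      # figure cited in the text
TRADING_DAYS_PER_YEAR   = 252    # typical US equity calendar
NUM_EXCHANGES           = 4      # illustrative assumption

annual_gb = GB_PER_DAY_PER_EXCHANGE * TRADING_DAYS_PER_YEAR * NUM_EXCHANGES
print(f"~{annual_gb:,} GB (~{annual_gb / 1024:.1f} TB) per year")
# ~5,040 GB (~4.9 TB) per year -- before indexing, backups, or derived datasets.
```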

Making sense of the data is perhaps the most expensive part of the process. New data-based ideas are hard to come by, and capable data scientists are in high demand and command premium compensation. Even the seemingly basic tasks of retrieving the data and structuring it in a computer database can require many hours of computer programming. The disparity of data standards among data providers and financial products makes even the basic task of parsing file names surprisingly complex.
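A small sketch of what that file-name problem looks like in practice: each vendor names files differently, and a normalization layer has to recognize every scheme. The two naming conventions below are invented for illustration; real vendor formats vary even more:

```python
import re
from datetime import date

# Two hypothetical vendor naming conventions for the same kind of file;
# real providers each have their own, often undocumented, schemes.
PATTERNS = [
    re.compile(r"^(?P<sym>[A-Z]+)_(?P<y>\d{4})(?P<m>\d{2})(?P<d>\d{2})\.csv$"),
    re.compile(r"^(?P<d>\d{2})-(?P<m>\d{2})-(?P<y>\d{4})\.(?P<sym>[a-z]+)\.dat$"),
]

def parse_filename(name: str):
    """Map a vendor file name to a normalized (symbol, date) pair."""
    for pat in PATTERNS:
        match = pat.match(name)
        if match:
            g = match.groupdict()
            return g["sym"].upper(), date(int(g["y"]), int(g["m"]), int(g["d"]))
    raise ValueError(f"Unrecognized naming convention: {name}")

print(parse_filename("GOOGL_20140506.csv"))    # ('GOOGL', datetime.date(2014, 5, 6))
print(parse_filename("06-05-2014.googl.dat"))  # ('GOOGL', datetime.date(2014, 5, 6))
```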

Even when the data is acquired, properly stored, opened and turned into promising models, the majority of data-related costs may still lie ahead. To ascertain a model's performance, it needs to be run on at least one year (most often, two years) of historical data, which requires advanced computing power. Once the backtests are completed and the models are verified to work (80% of models will fail), the costs are just about to ramp up. In a trading "production" environment, vast volumes of streaming data need to be received, captured, processed, stored and turned into trading instructions. Capturing the full range of data requires investing not just in advanced processing power, or hardware, but also in advanced network architecture and data centers, as well as physical network equipment such as fast network switches and communication links like microwave networks.
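For a sense of what the backtesting step involves, here is a deliberately toy sketch: simulated daily returns stand in for two years of historical data, and the trading signal is a throwaway placeholder rather than any real strategy:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical two years (~504 trading days) of daily returns for one instrument.
returns = rng.normal(0, 0.01, 504)

# Toy signal: hold the instrument after an up day, stay flat otherwise.
positions = (returns > 0).astype(float)
# Trade on the NEXT day's return to avoid look-ahead bias.
strategy_returns = positions[:-1] * returns[1:]

sharpe = strategy_returns.mean() / strategy_returns.std() * np.sqrt(252)
print(f"Backtest Sharpe over two simulated years: {sharpe:.2f}")
```

Real backtests replace the simulated returns with tick-level history and the toy signal with the proprietary model, which is exactly where the computing costs come from.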

As a result, once a working electronic trading approach has been developed and deployed, it can be extremely costly to change. As long as the systems remain profitable, they are typically left online. This, in turn, leads to great persistence in market microstructure in each individual financial instrument. The microstructure dynamics, however, may differ considerably from one financial instrument to the next.

For example, as AbleMarkets.com research shows, in the equities space, Google Inc. Class A stock (NASDAQ:GOOGL) had the highest participation of aggressive HFTs among all S&P 500 stocks in 2014. Switching investment activity from GOOGL to GOOG, the Class C shares of the same company, lowers aggressive HFT participation by 1% of volume, reducing trading costs. Switching further to a different company with a similar risk-return profile can reduce execution costs even more, significantly improving portfolio returns. Similarly, avoiding securities with high flash crash risk can deliver considerable performance improvement. Understanding market microstructure risk is no longer a matter of curiosity, but of sound portfolio management.
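One simple way to fold such metrics into a selection model is to penalize each candidate's expected return by a microstructure cost term. The sketch below follows the GOOGL-versus-GOOG example in spirit, but every number in it, including the penalty coefficient, is an illustrative assumption rather than a published estimate:

```python
# Score candidate securities by expected return net of a microstructure
# penalty. All numbers below are illustrative, not published estimates.
candidates = {
    #            (expected annual return, aggressive HFT share of volume)
    "GOOGL": (0.080, 0.25),
    "GOOG":  (0.080, 0.24),  # same company, Class C shares
    "PEER":  (0.079, 0.15),  # similar risk-return profile, calmer microstructure
}

COST_PER_HFT_SHARE = 0.10  # assumed execution-cost drag per unit of HFT participation

def net_score(exp_ret: float, hft_share: float) -> float:
    return exp_ret - COST_PER_HFT_SHARE * hft_share

ranked = sorted(candidates.items(), key=lambda kv: net_score(*kv[1]), reverse=True)
for ticker, (r, h) in ranked:
    print(f"{ticker}: net score {net_score(r, h):.4f}")
```

Under these assumed inputs, the hypothetical peer with calmer microstructure ranks first, GOOG edges out GOOGL, and the penalty term does the work of making intraday risk part of the selection decision.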

Steve Krawciw is CEO of AbleMarkets.com. Irene Aldridge is Managing Director of AbleMarkets.com and author of High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems (2nd edition, Wiley, 2013).
