The Flash Crash occurred at roughly 2:45 PM on May 6th, 2010. The Dow Jones Industrial Average plunged hundreds of points, and within minutes had recovered most of its value. This was, and continues to be, referenced by people who believe that HFT is somehow damaging the markets at large, that without algorithms and computers the value would never have dropped so quickly, and that it represents all that is wrong with modern trading. I have a different perspective. I spent about eight years in the HFT industry, and am intimately familiar with the nitty-gritty details of modern exchange technology, market making algorithms, low latency topologies, and system level design and architecture. That makes me somewhat of an insider, and not unbiased. I was sitting feet away from a trading desk when the flash crash occurred, and I’m writing this to set the record straight.
Before I do, let’s put some ground rules in place. There are some simple concepts that need to be understood in order to talk about modern trading systems. I’m going to run through them briefly so that the astute reader has the necessary context. I’ll try not to be too technical, and I’m going to gloss over some of the long-winded details; brevity is, after all, the soul of wit.
- First, the value of a stock (and of anything, really) is determined by what someone is willing to pay for it at any given time in any given market. There is no intrinsic worth. Right now Google (GOOG) is trading at $597.11 on NASDAQ, because someone was willing to purchase a share for $597.11. This seems simple enough, but it is crucial to understanding what happened on May 6th.
- Second, most modern markets are built around a two-sided order book. An order book lists prices and quantities that people are willing to sell at, and prices and quantities that people are willing to buy at. At any given time there may be a significant number of orders below or above the current price in the market. In other words, right now there are 100 shares available at $597.11, 200 shares at $599, and so on. The order book has depth, which is not immediately obvious.
- Third, nobody is forced to buy or sell anything at a price they didn’t agree to (with some esoteric and irrelevant exceptions). Bids placed into the market, regardless of their duration, are risking capital in order to participate. I have no obligation to buy from you at any price, and certainly can’t be forced to sell to you at a price I didn’t agree on.
- Fourth, market makers are generally firms that are willing to buy and sell at a given price, and are willing to provide liquidity to any given market on a consistent basis. Many of these firms get reductions in fees based on volume, to incentivize trading.
- Finally, liquidity is the availability of a given instrument. A highly liquid market means that there are active buyers and sellers, and depth of book that can cover large orders. Price discovery is the process by which the best price is found.
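To make the two-sided order book and its depth concrete, here is a minimal sketch in Python. All prices, quantities, and names are hypothetical illustrations, not real market data or any exchange’s actual data model:

```python
# Minimal sketch of a two-sided order book with depth.
# All prices and quantities here are hypothetical.

from dataclasses import dataclass

@dataclass
class Level:
    price: float    # price at this level
    quantity: int   # total shares resting at this price

# Bids sorted best (highest) first; asks sorted best (lowest) first.
bids = [Level(597.10, 150), Level(597.05, 300), Level(596.90, 500)]
asks = [Level(597.11, 100), Level(599.00, 200), Level(601.50, 400)]

def top_of_book(bids, asks):
    """Best bid, best ask, and the spread between them."""
    best_bid, best_ask = bids[0].price, asks[0].price
    return best_bid, best_ask, best_ask - best_bid

def depth(levels):
    """Total shares available across all levels on one side."""
    return sum(l.quantity for l in levels)

bid, ask, spread = top_of_book(bids, asks)
print(f"best bid {bid}, best ask {ask}, spread {spread:.2f}")
print(f"ask-side depth: {depth(asks)} shares")
```

The point of the sketch is the last line: the 100 shares at the top of the book are only part of the story; the levels behind them are the depth that large orders rely on.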
With that out of the way, let’s dive into some of the structural components of modern trading.
There isn’t a single market. Shares of any given equity may trade on many different markets: NYSE, NASDAQ, Bats, NSX, Arca, Alternext, and so on. The system is distributed, such that it is possible to execute orders across multiple markets simultaneously. Arbitrage is the process of profiting from the difference between two separate markets. For example, if two different exchanges list the same equity at different prices, one could make money by buying on the first and selling on the second, thereby capturing the spread (the difference in pricing).
The reason that prices are often identical across markets is arbitrage pressure causing price convergence. The speed at which this convergence occurs is a race between participants looking to make the spread. Many high frequency trading companies play this game all day long between markets all over the world. It is a simple idea that is technically difficult to execute and highly competitive.
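The arbitrage described above boils down to a simple check across two venues. The quotes and sizes below are invented for illustration, and a real implementation would also have to account for fees, latency, and the risk that one leg fails to fill:

```python
# Sketch of the cross-market arbitrage check described above.
# If one venue's ask is below another venue's bid, buy the first
# and sell the second. All quotes here are hypothetical.

def arbitrage_opportunity(ask_a, bid_b, size):
    """Profit from buying `size` shares at ask_a on venue A and
    selling them at bid_b on venue B, or 0.0 if no spread exists.
    Ignores fees, latency, and fill risk."""
    if bid_b > ask_a:
        return (bid_b - ask_a) * size
    return 0.0

# The same equity quoted at different prices on two hypothetical venues:
profit = arbitrage_opportunity(ask_a=597.11, bid_b=597.15, size=100)
print(round(profit, 2))  # 4.0 before fees
```

Capturing that spread pushes the two venues' prices together, which is exactly the convergence pressure described above; the race is over who captures it first.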
Markets are often connected together, and in certain circumstances can route orders off-market in order to satisfy them. However, participants have an incentive to connect to multiple markets in order to increase available liquidity, match large orders across markets, and get best-price execution. In other words, the more data you can ingest and the faster you can do it, the better pricing you can theoretically get. When you place an order through a commercial service, it generally shows you the best price available anywhere based on the information it has.
The flow of information across markets is extraordinarily fast. Propagation delays over geographic distances, at the speed of light in fiber or via microwave, are usually measured in milliseconds. Modern platforms can match and execute orders internally in microseconds (and in certain cases in nanoseconds!). The faster this process happens, the quicker we have price convergence across geographic areas. A human being might not be able to decide in a microsecond whether he wants to buy something, but he can tell a computer that he wants the lowest available price. Modern trading offloads intent into computers, because fluctuations in time and the complexity of a distributed exchange system make entirely human trading completely obsolete in many markets with public price discovery.
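For a rough sense of the timescales, light in fiber travels at roughly two-thirds of its vacuum speed, so the geographic component alone is in the milliseconds. The distance and speed figures below are approximations for a back-of-the-envelope calculation:

```python
# Back-of-the-envelope propagation delay, as described above.
# Both constants are approximations.

SPEED_IN_FIBER_KM_S = 200_000  # ~2/3 of the speed of light in vacuum
NY_TO_CHICAGO_KM = 1_200       # rough straight-line distance

one_way_ms = NY_TO_CHICAGO_KM / SPEED_IN_FIBER_KM_S * 1000
print(f"One-way fiber delay: ~{one_way_ms:.1f} ms")  # ~6.0 ms
```

Against that ~6 ms floor, a matching engine that executes in microseconds is effectively instantaneous, which is why the race shifts to the wire itself (and why microwave links, which travel closer to vacuum speed, are attractive).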
Alright, that was a lot to digest. The implications of algorithmic trading are pretty widespread, and there is an argument to be made on both sides as to whether or not it is necessary. It is important to note that major exchanges make their money by charging transactional fees per trade, and that the majority of the volume in any given market is being driven by algorithmic trading systems. So on the one hand we have public exchanges that must follow the rules and regulations, but are benefitting from the way the system is currently constructed, because massive volume directly affects their bottom lines. I believe that market competition is a good thing, and that arbitrary rules and regulations have a detrimental effect on the industry. In the 1980s you could spend a significant amount of money in brokerage fees, and get pretty terrible pricing on top of that, because of the lack of transparent price discovery. It is an unpopular opinion among those who think that HFT is robbing them silently in the night, but I believe that $4.95 trades on E-Trade are directly possible because of how computers have changed the trading landscape, increased competition, and ultimately passed massive savings on to consumers. I could write an entire article about how HFT is actually helping the average person and is completely misunderstood, but on to the Flash Crash!
Now that we have some of the necessary background, let’s talk about that May day. In the official SEC-CFTC report, the Chicago Mercantile Exchange (CME) is fingered as the place the trouble started. Allegedly, large futures trades triggered instability in the underlying. Futures are derivative contracts that allow speculation on the future price of an underlying asset. The E-Mini tracks the S&P 500 index price in the future. It can be constructed from a basket of stocks representing the S&P 500, or various collections of other contracts. A simple HFT strategy would be to trade that basket against the future’s value, back and forth all day long. So conceivably a very large trade on the E-Mini could cause various firms to rush to the equities markets to cover their positions. Given the size of the trading involved, I think it is highly unlikely that this happened, or at least that the effect was widespread enough to cause a systemic failure in the system. The CME’s counter-argument is compelling in this regard, but is too technical to discuss here.
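The basket-versus-future strategy mentioned above can be sketched as a simple fair-value comparison. The prices, weights, and threshold here are hypothetical stand-ins, not a real index replication; an actual strategy would also model carrying costs, dividends, and execution risk:

```python
# Sketch of the basket-vs-future strategy described above.
# All prices, weights, and thresholds are hypothetical.

def index_basket_value(prices, weights):
    """Value of a weighted basket of stocks approximating the index."""
    return sum(p * w for p, w in zip(prices, weights))

def signal(futures_price, basket_value, threshold):
    """If the future is rich relative to the basket, sell the future
    and buy the basket; if it is cheap, do the opposite."""
    diff = futures_price - basket_value
    if diff > threshold:
        return "sell_future_buy_basket"
    if diff < -threshold:
        return "buy_future_sell_basket"
    return "no_trade"

basket = index_basket_value([120.0, 85.5, 60.25], [2.0, 1.5, 3.0])
print(signal(futures_price=basket + 1.0, basket_value=basket, threshold=0.5))
```

A firm running this all day holds offsetting positions on both sides, which is why a violent move in the E-Mini could, in theory, send it rushing into the equities markets to rebalance.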
Regardless of how it started, sell pressure increased on the NYSE as firms started unwinding positions, which began to drive the price down. This is a normal occurrence, but what happened next is anything but normal. NYSE hiccuped. The increased volume started queuing in their systems, resulting in large delays in execution. In other words, the time from placing an order until it executed started to spiral upwards quickly as their systems failed under the load. Now, if you remember from above, the tolerances of many of these HFT systems are tight. Most firms began cancelling their orders, or physically disconnecting to take advantage of market-supplied cancel-on-disconnect functionality, because they assumed there was a systems fault in progress and didn’t want to get caught with orders that would execute in a market they couldn’t trade in. Order flow was routed to other venues that were known to be healthy, and NYSE was dropped by the HFT firms providing liquidity because it was clear it wasn’t functioning properly.
When all of the orders in a market disappear, the book becomes spectacularly thin. This means that people sitting way off the market, with test orders or just generally hanging around, could end up having a trade go through at an outrageous price. This is exactly what happened: people with market orders (taking the best available price in the market) were suddenly buying and selling way off market, because all of the sophisticated trading firms had taken their liquidity elsewhere while NYSE was getting its act together. This resulted in a roughly 600-point drop as multiple stocks thinned out.
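What a market order does to a thinned-out book can be shown directly. The price levels below are hypothetical, contrasting how the same market sell fills sanely against a normal book and at outrageous prices against a book reduced to far-off resting orders:

```python
# Sketch of a market sell order "walking the book" as described above.
# When liquidity vanishes, only far-off resting orders remain.
# All (price, quantity) levels are hypothetical.

def fill_market_sell(bids, size):
    """Fill a market sell against resting bids, best price first.
    Returns the list of (price, qty) executions."""
    fills = []
    for price, qty in bids:
        if size <= 0:
            break
        take = min(size, qty)
        fills.append((price, take))
        size -= take
    return fills

# Normal book: deep and tight around the last trade.
normal = [(597.10, 500), (597.05, 1000)]
# Thinned book: only stray orders far below the last trade remain.
thinned = [(550.00, 100), (100.00, 100), (0.01, 1000)]

print(fill_market_sell(normal, 300))   # fills near the last trade
print(fill_market_sell(thinned, 300))  # fills at outrageous prices
```

The market order itself is identical in both cases; only the book changed. That is why the prints looked insane without anyone being forced to trade at a price they hadn’t offered.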
Tellingly, minutes later when NYSE restored normal operation the pricing rapidly normalized as trading firms that were locked out of the market reentered once they had determined that systems were functioning normally. In other words, the combined logic of the various HFT firms resulted in routing around systemic failure, and immediately reinforcing the market once it had been restored. This is exactly the functionality one wants to see in a resilient distributed market.
Put yourself in an HFT firm’s shoes for a moment. There was no way to determine instantly what was going on at NYSE. It could have been the firm’s own fault, a faulty computer, a broken cable, a sliced fiber. The exchange itself could have exploded due to a nuclear attack, an earthquake, a tidal wave, or another act of God. Yet, despite NYSE being incommunicado, trading continued normally at the other exchanges, and full service was restored at the NYSE within minutes. That is called good engineering.
When things go wrong, everybody looks for a scapegoat. HFT was a good one. Banks are uncomfortable with firms eating their tasty margins (look at the recent launch of IEX as an example of this). NYSE doesn’t want to admit fault of any kind, and HFT firms are hard pressed to explain what they do, or how it positively affects other participants in the market. It’s easy to point fingers. Nanex made some incredibly flawed claims based on data that cannot possibly support their conclusions, most of it aggregated tick data that doesn’t reveal individual firm intent, but anti-HFT advocates jumped at the chance to have concrete proof that their worst fears were realized. The reality is always a bit more nuanced.
This may not have convinced you; it is only one person’s account. But at the very least you should consider the possibility that it is correct, and that any preconceived notions you have about the effects HFT has on the market may not be. These are complex and dynamic systems, and they can interact in unexpected ways, but in this case they did the right thing.