with Jeremiah Chafkin and Robert Sinnott, Journal of Index Investing 2(2011), 12–35.
Implicit in most asset-allocation policies is the statistical assumption of “stationarity,” which means that the means, variances, and covariances of asset returns are assumed to be constant over time. This assumption is a reasonable approximation during normal market conditions but fails dramatically during periods of market turmoil and dislocation. In such periods, market volatility is highly dynamic, correlations can jump to 100% in a matter of days, and risk premia can become negative for months at a time. FTSE and AlphaSimplex Group have developed a family of rule-driven (passive), transparent, and high-capacity indices whose volatilities are rescaled as often as daily with the goal of maintaining more stable risk levels. By stabilizing the risk of each asset class over time, the FTSE StableRisk Indices have the potential to capture the long-term risk premia of asset classes and simple strategies with less severe maximum drawdowns than those of traditional indices, which have no risk controls.
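The volatility-rescaling idea can be sketched in a few lines. The rule below is a generic volatility-targeting overlay, not the actual FTSE StableRisk methodology; the annualized target, lookback window, and leverage cap are illustrative assumptions:

```python
import numpy as np

def stable_risk_weights(returns, target_vol=0.10, window=21, cap=2.0):
    """Generic volatility-targeting overlay (illustrative; not the actual
    FTSE StableRisk methodology): scale daily exposure so that trailing
    realized volatility matches a fixed annualized target, up to a cap."""
    weights = np.ones(len(returns))
    for t in range(window, len(returns)):
        realized = returns[t - window:t].std(ddof=1) * np.sqrt(252)
        if realized > 0:
            weights[t] = min(target_vol / realized, cap)
    return weights

# A calm regime followed by a turbulent one: the overlay levers up
# when volatility is low and cuts exposure when volatility spikes.
rng = np.random.default_rng(0)
rets = np.concatenate([rng.normal(0, 0.005, 250), rng.normal(0, 0.02, 250)])
w = stable_risk_weights(rets)
scaled = w * rets
```

In this toy example the scaled series holds its realized volatility near the 10% target through both regimes, which is the sense in which such an index has "more stable risk" than its unscaled counterpart.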
with Alexander D. Healy, The Handbook of News Analytics in Finance
As financial markets grow in size and complexity, risk management protocols must also evolve to address more challenging demands. One of the most difficult of these challenges is managing event risk, the risk posed by unanticipated news that causes major market moves over short time intervals. Often cited but rarely managed, event risk has been relegated to the domain of qualitative judgment and discretion because of its heterogeneity and velocity. In this chapter, we describe one initiative aimed at solving this problem. The Thomson Reuters NewsScope Event Indices Project is an integrated framework for incorporating real-time news from the Thomson Reuters NewsScope subscription service into systematic investment and risk management protocols. The framework consists of a set of real-time event indices—each one taking on numerical values between 0 and 100—designed to capture the occurrence of unusual events of a particular kind. Each index is constructed by applying disciplined pattern recognition algorithms to real-time news feeds, and validated using econometric methods applied to historical data.
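As a rough illustration of how a bounded 0-to-100 index might behave, the sketch below maps a hypothetical raw event-intensity score (e.g., a count of news items matching some pattern in an interval) to its percentile rank against a historical baseline. This is an assumed construction for illustration, not the actual NewsScope Event Index algorithm:

```python
import numpy as np

def event_index(history, current):
    """Map a raw event-intensity score to a 0-100 reading via its
    percentile rank in a historical baseline. (An assumed construction
    for illustration; not the actual NewsScope Event Index algorithm.)"""
    history = np.asarray(history, dtype=float)
    return 100.0 * np.mean(history <= current)

# Hypothetical baseline of hourly story counts for some topic.
baseline = [3, 4, 2, 5, 3, 4, 6, 2, 3, 4]
print(event_index(baseline, 5))    # unusual but not extreme
print(event_index(baseline, 40))   # extreme burst: index near 100
```

The percentile mapping is what keeps the index bounded and comparable across event types with very different raw scales.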
with Jasmina Hasanhodzic and Emanuele Viola, Quantitative Finance 7, 1043–1050.
We propose to study market efficiency from a computational viewpoint. Borrowing from theoretical computer science, we define a market to be efficient with respect to resources S (e.g., time, memory) if no strategy using resources S can make a profit. As a first step, we consider memory-m strategies whose action at time t depends only on the m previous observations at times t - m,...,t - 1. We introduce and study a simple model of market evolution, where strategies impact the market by their decision to buy or sell. We show that the effect of optimal strategies using memory m can lead to "market conditions" that were not present initially, such as (1) market bubbles and (2) the possibility for a strategy using memory m' > m to make a bigger profit than was initially possible. We suggest ours as a framework to rationalize the technological arms race of quantitative trading firms.
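A minimal instance of the model can be sketched as follows, assuming a linear price-impact rule and unit trade sizes (both illustrative simplifications): a memory-1 momentum strategy, fed a single initial up-tick, keeps buying, its own impact keeps pushing the price up, and it profits from the bubble it created.

```python
def momentum_m1(prices):
    """Memory-1 strategy: buy after an up-tick, sell after a down-tick."""
    if len(prices) < 2 or prices[-1] == prices[-2]:
        return 0
    return 1 if prices[-1] > prices[-2] else -1

def run_market(prices_init, strategy, steps=30, impact=0.01):
    """Toy market in the spirit of the paper: each unit trade moves the
    price by a fixed linear impact (an illustrative assumption)."""
    prices = list(prices_init)
    position, pnl = 0, 0.0
    for _ in range(steps):
        action = strategy(prices)                    # -1 sell, 0 hold, +1 buy
        new_price = prices[-1] + impact * action
        pnl += position * (new_price - prices[-1])   # inventory gains from own impact
        position += action
        prices.append(new_price)
    return prices, pnl

# One initial up-tick is enough to set off a self-reinforcing rise.
prices, pnl = run_market([100.0, 100.5], momentum_m1)
```

The strategy's profit here comes entirely from market conditions it manufactured itself, which is the flavor of result the paper formalizes.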
with Jasmina Hasanhodzic
One of the most enduring empirical regularities in equity markets is the inverse relationship between stock prices and volatility, first documented by Black (1976), who attributed it to the effects of financial leverage. As a company's stock price declines, it becomes more highly leveraged given a fixed level of debt outstanding, and this increase in leverage induces higher equity-return volatility. In a sample of all-equity-financed companies from January 1972 to December 2008, we find that the leverage effect is just as strong, if not stronger, implying that the inverse relationship between price and volatility is not driven by financial leverage.
Journal of Economic Literature 50 (2012), 151-178.
The recent financial crisis has generated many distinct perspectives from various quarters. In this article, I review a diverse set of 21 books on the crisis, 11 written by academics, and 10 written by journalists and one former Treasury Secretary. No single narrative emerges from this broad and often contradictory collection of interpretations, but the sheer variety of conclusions is informative, and underscores the desperate need for the economics profession to establish a single set of facts from which more accurate inferences and narratives can be constructed.
with Monica Billio, Mila Getmansky, and Loriana Pelizzon
A significant contributing factor to the Financial Crisis of 2007–2009 was the apparent interconnectedness among hedge funds, banks, brokers, and insurance companies, which amplified shocks into systemic events. In this paper, we propose five measures of systemic risk based on statistical relations among the market returns of these four types of financial institutions. Using correlations, cross-autocorrelations, principal components analysis, regime-switching models, and Granger causality tests, we find that all four sectors have become highly interrelated and less liquid over the past decade, increasing the level of systemic risk in the finance and insurance industries. These measures can also identify and quantify financial crisis periods. Our results suggest that while hedge funds can provide early indications of market dislocation, their contributions to systemic risk may not be as significant as those of banks, insurance companies, and brokers who take on risks more appropriate for hedge funds.
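Of the five measures, Granger causality is the easiest to sketch. The code below computes a textbook one-direction Granger F-statistic by comparing a restricted autoregression of one return series against an unrestricted regression that adds lags of the other; it is a generic illustration, not the paper's exact specification:

```python
import numpy as np

def granger_f_stat(x, y, lags=1):
    """F-statistic for the null that lagged x adds nothing to an
    autoregression of y (one direction of Granger causality).
    A textbook construction, not the paper's exact specification."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    T = len(y)
    Y = y[lags:]
    # Restricted model: y on a constant and its own lags.
    Xr = np.column_stack([np.ones(T - lags)]
                         + [y[lags - k - 1:T - k - 1] for k in range(lags)])
    # Unrestricted model: also include lags of x.
    Xu = np.column_stack([Xr]
                         + [x[lags - k - 1:T - k - 1] for k in range(lags)])
    def ssr(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return np.sum((Y - X @ beta) ** 2)
    ssr_r, ssr_u = ssr(Xr), ssr(Xu)
    df = T - lags - Xu.shape[1]
    return ((ssr_r - ssr_u) / lags) / (ssr_u / df)

# Simulated example: y loads on lagged x but not vice versa, so the
# statistic should be large for x -> y and small for y -> x.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * x[t - 1] + 0.1 * rng.normal()
f_xy = granger_f_stat(x, y)
f_yx = granger_f_stat(y, x)
```

Applied pairwise to the returns of hedge funds, banks, brokers, and insurers, statistics of this type yield the directed network of spillovers that the paper uses to gauge interconnectedness.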
with Amir Khandani, Journal of Financial Markets 14 (2011), 1-46.
During the week of August 6, 2007, a number of quantitative long/short equity hedge funds experienced unprecedented losses. It has been hypothesized that a coordinated deleveraging of similarly constructed portfolios caused this temporary dislocation in the market. Using the simulated returns of long/short equity portfolios based on five specific valuation factors, we find evidence that the unwinding of these portfolios began in July 2007 and continued until the end of 2007. Using transactions data, we find that the simulated returns of a simple market-making strategy were significantly negative during the week of August 6, 2007, but positive before and after, suggesting that the Quant Meltdown of August 2007 was the combined effect of portfolio deleveraging throughout July and the first week of August, and a temporary withdrawal of market-making risk capital starting August 8th. Our simulations point to two unwinds—a mini-unwind on August 1st starting at 10:45am and ending at 11:30am, and a more sustained unwind starting at the open on August 6th and ending at 1:00pm—that began with stocks in the financial sector and long Book-to-Market and short Earnings Momentum. These conjectures have significant implications for the systemic risks posed by the hedge-fund industry.
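The market-making proxy is a short-horizon contrarian rule of the kind long studied in this literature: each day, hold each stock in proportion to the negative of its previous-day return deviation from the cross-sectional mean. A minimal sketch (the universe size and scaling are illustrative):

```python
import numpy as np

def contrarian_weights(past_returns):
    """Short-horizon contrarian rule of the type used as a market-making
    proxy: overweight yesterday's losers and underweight yesterday's
    winners in proportion to their deviation from the cross-sectional
    mean return. Dollar-neutral by construction (weights sum to zero)."""
    r = np.asarray(past_returns, dtype=float)
    return -(r - r.mean()) / len(r)

# Yesterday's returns for four stocks: the biggest loser (index 3) gets
# the largest long weight, the biggest winner the largest short weight.
w = contrarian_weights([0.02, -0.01, 0.03, -0.04])
```

Because such a rule supplies liquidity to whoever is aggressively selling losers and buying winners, its simulated returns turn sharply negative precisely when market-making capital is withdrawn, which is how the paper dates the dislocation.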
with Eric Fielding and Jian Helen Yang, Journal of Investment Management 9 (2011), 17-49.
We propose the National Transportation Safety Board (NTSB) as a model organization for addressing systemic risk in industries and contexts other than transportation. When adopted by regulatory agencies and the transportation industry, the safety recommendations of the NTSB have been remarkably effective in reducing the number of fatalities in various modes of transportation since the NTSB's inception in 1967 as an independent agency. Formerly part of the Civil Aeronautics Board (now the Federal Aviation Administration), the NTSB has no regulatory authority and is solely focused on conducting forensic investigations of transportation accidents and proposing safety recommendations. Although it has only 400 full-time employees, the NTSB draws on a much larger network of experts from other government agencies and the private sector who are on call to assist in accident investigations on an as-needed basis. By allowing and encouraging the participation of all interested parties in its investigations, the NTSB is able to produce definitive analyses of even the most complex accidents and provide genuinely actionable measures for reducing the chances of future accidents. We believe it is possible to create more efficient and effective systemic-risk management processes in many other industries, including the financial services industry, by studying the organizational structure and functions of the NTSB.
with Ely Dahan, Adlar J. Kim, Tomaso Poggio, and Nicholas T. Chan, Journal of Marketing Research, 48 (2011), 497-517.
Identifying winning new product concepts can be a challenging process that requires insight into private consumer preferences. To measure consumer preferences for new product concepts, the authors apply a 'securities trading of concepts,' or STOC, approach, in which new product concepts are traded as financial securities. The authors apply this method because market prices are known to efficiently collect and aggregate private information regarding the economic value of goods, services, and firms, particularly when trading financial securities. This research compares the STOC approach against stated-choice, conjoint, constant-sum, and longitudinal revealed-preference data. The authors also place STOC in the context of previous research on prediction markets and experimental economics. The authors conduct a series of experiments in multiple product categories to test whether STOC (1) is more cost efficient than other methods, (2) passes validity tests, (3) measures expectations of others, and (4) reveals individual preferences, not just those of the crowd. The results also show that traders exhibit bias on the basis of self-preferences when trading. Ultimately, STOC offers two key advantages over traditional market research methods: cost efficiency and scalability. For new product development teams deciding how to invest resources, this scalability may be especially important in the Web 2.0 world, in which customers are constantly interacting with firms and one another in suggesting numerous product design possibilities that need to be screened.
with Thomas Brennan, Quarterly Journal of Finance 1 (2011), 55-108.
We propose a single evolutionary explanation for the origin of several behaviors that have been observed in organisms ranging from ants to human subjects, including risk-sensitive foraging, risk aversion, loss aversion, probability matching, randomization, and diversification. Given an initial population of individuals, each assigned a purely arbitrary behavior with respect to a binary choice problem, and assuming that offspring behave identically to their parents, only those behaviors linked to reproductive success will survive, and less reproductively successful behaviors will disappear at exponential rates. This framework generates a surprisingly rich set of behaviors, and the simplicity and generality of our model suggest that these behaviors are primitive and universal.
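The probability-matching result can be illustrated with a simplified instance of the model: when the environment makes one of two actions fertile with probability p in each generation, common to the whole population, long-run growth is governed by the expected log reproduction rate, and the behavior that maximizes it chooses that action with probability exactly p rather than always picking the likelier action.

```python
import numpy as np

def log_growth(f, p):
    """Expected log reproduction rate of a behavior that chooses action a
    with probability f, when the environment makes only action a fertile
    with probability p in each generation, common to all individuals.
    (A simplified instance of the binary choice model, for illustration.)"""
    return p * np.log(f) + (1 - p) * np.log(1 - f)

# The growth-optimal behavior matches the environmental probability
# ("probability matching") instead of deterministically choosing a.
p = 0.7
fs = np.linspace(0.01, 0.99, 99)
best = fs[np.argmax(log_growth(fs, p))]
```

Randomizing looks irrational for any single individual, but because reproductive risk here is systematic, the f = p lineage outgrows both the always-a and the 50/50 lineages at an exponential rate.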