Research
Lo, Andrew W. (2016), What Is an Index?, Journal of Portfolio Management 42 (2), 21–36.
Technological advances in telecommunications, securities exchanges, and algorithmic trading have facilitated a host of new investment products that resemble theme-based passive indexes but which depart from traditional market-cap-weighted portfolios. I propose broadening the definition of an index using a functional perspective—any portfolio strategy that satisfies three properties should be considered an index: (1) it is completely transparent; (2) it is investable; and (3) it is systematic, i.e., it is entirely rules-based and contains no judgment or unique investment skill. Portfolios satisfying these properties that are not market-cap-weighted are given a new name: “dynamic indexes.” This functional definition widens the universe of possibilities and, most importantly, decouples risk management from alpha generation. Passive strategies can and should be actively risk managed, and I provide a simple example of how this can be achieved. Dynamic indexes also create new challenges of which the most significant is backtest bias, and I conclude with a proposal for managing this risk.
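The three properties can be made concrete with a short sketch. The following is a hypothetical illustration, not the paper's own example: a fully rules-based "dynamic index" that scales exposure to a benchmark so that trailing volatility stays near a target, i.e., a passive strategy with active risk management. The window and volatility target are illustrative parameters.

```python
import random

def target_vol_weights(returns, window=20, target_vol=0.10):
    """Rules-based exposure: scale the benchmark weight so trailing
    annualized volatility matches the target (capped at 1.0).
    Transparent, investable, and systematic: no discretion involved."""
    weights = []
    for t in range(len(returns)):
        if t < window:
            weights.append(1.0)  # full exposure until enough history accrues
            continue
        recent = returns[t - window:t]
        mean = sum(recent) / window
        var = sum((r - mean) ** 2 for r in recent) / (window - 1)
        realized = (var * 252) ** 0.5  # annualize daily volatility
        weights.append(min(1.0, target_vol / realized) if realized > 0 else 1.0)
    return weights

random.seed(0)
daily = [random.gauss(0.0004, 0.01) for _ in range(250)]  # simulated benchmark returns
w = target_vol_weights(daily)
print(w[:3], w[-1])
```

Because every step is a fixed rule on public data, the strategy satisfies all three properties yet is not market-cap-weighted.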
Law Is Code: A Software Engineering Approach to Analyzing the United States Code
Li, William, Pablo D. Azar, David Larochelle, Phil Hill, and Andrew W. Lo (2015), Law Is Code: A Software Engineering Approach to Analyzing the United States Code, Journal of Business & Technology Law 10 (2), 297–374.
The agglomeration of rules and regulations over time has produced a body of legal code that no single individual can fully comprehend. This complexity produces inefficiencies, makes the processes of understanding and changing the law difficult, and frustrates the fundamental principle that the law should provide fair notice to the governed. In this Article, we take a quantitative, unbiased, and software-engineering approach to analyze the evolution of the United States Code from 1926 to today. Software engineers frequently face the challenge of understanding and managing large, structured collections of instructions, directives, and conditional statements, and we adapt and apply their techniques to the U.S. Code over time. Our work produces insights into the structure of the U.S. Code as a whole, its strengths and vulnerabilities, and new ways of thinking about individual laws. For example, we identify the first appearance and spread of important terms in the U.S. Code like “whistleblower” and “privacy.” We also analyze and visualize the network structure of certain substantial reforms, including the Patient Protection and Affordable Care Act and the Dodd-Frank Wall Street Reform and Consumer Protection Act, and show how the interconnections of references can increase complexity and create the potential for unintended consequences. Our work is a timely illustration of computational approaches to law as the legal profession embraces technology for scholarship in order to increase efficiency and to improve access to justice.
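The first-appearance analysis mentioned in the abstract reduces to a simple text query over dated snapshots of the Code. The sketch below uses a toy three-snapshot corpus (hypothetical text, not actual U.S. Code language) to show the idea.

```python
# A minimal sketch of term first-appearance analysis; the snapshot
# texts here are invented stand-ins for yearly U.S. Code editions.
snapshots = {
    1926: "the secretary shall prescribe rules and regulations",
    1974: "records concerning individual privacy shall be protected",
    2010: "no employer may discharge a whistleblower for reporting",
}

def first_appearance(term, corpus):
    """Return the earliest snapshot year whose text contains the term,
    or None if the term never appears."""
    years = sorted(y for y, text in corpus.items() if term in text.lower())
    return years[0] if years else None

print(first_appearance("privacy", snapshots))        # -> 1974
print(first_appearance("whistleblower", snapshots))  # -> 2010
```

Scaled to full yearly editions of the Code, the same query traces when legal concepts enter the statutory vocabulary.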
Brennan, Thomas J., and Andrew W. Lo (2015), Reply to "(Im)Possible Frontiers: A Comment," Critical Finance Review 4 (1), 157–171.
In Brennan and Lo (2010), a mean-variance efficient frontier is defined as “impossible” if every portfolio on that frontier has negative weights, which is incompatible with the Capital Asset Pricing Model (CAPM) requirement that the market portfolio is mean-variance efficient. We prove that as the number of assets n grows, the probability that a randomly chosen frontier is impossible tends to one at a geometric rate, implying that the set of parameters for which the CAPM holds is extremely rare. Levy and Roll (2014) argue that while such “possible” frontiers are rare, they are ubiquitous. In this reply, we show that this is not the case; possible frontiers are not only rare, but they occupy an isolated region of mean-variance parameter space that becomes increasingly remote as n increases. Ingersoll (2014) observes that parameter restrictions can rule out impossible frontiers, but in many cases these restrictions contradict empirical fact and must be questioned rather than blindly imposed.
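The possibility check itself is elementary: frontier weights are affine in the target return, so a frontier is possible exactly when the per-asset nonnegativity intervals intersect. The Monte Carlo sketch below uses illustrative random parameters, not the paper's experiment, to show the share of possible frontiers collapsing as n grows.

```python
import numpy as np

def frontier_is_possible(mu, sigma):
    """Frontier weights are affine in the target return: w(mu_p) = g + h*mu_p.
    The frontier is 'possible' iff some mu_p makes every weight nonnegative,
    i.e. the intervals {mu_p : g_i + h_i*mu_p >= 0} have a common point."""
    inv = np.linalg.inv(sigma)
    ones = np.ones(len(mu))
    A = ones @ inv @ mu
    B = mu @ inv @ mu
    C = ones @ inv @ ones
    D = B * C - A ** 2
    g = (B * (inv @ ones) - A * (inv @ mu)) / D
    h = (C * (inv @ mu) - A * (inv @ ones)) / D
    lo, hi = -np.inf, np.inf
    for gi, hcoef in zip(g, h):
        if hcoef > 0:
            lo = max(lo, -gi / hcoef)
        elif hcoef < 0:
            hi = min(hi, -gi / hcoef)
        elif gi < 0:
            return False  # constant negative weight: never possible
    return lo <= hi

rng = np.random.default_rng(0)
possible_share = {}
for n in (2, 10, 50):
    count = 0
    for _ in range(200):
        mu = rng.normal(0.05, 0.05, n)       # random expected returns
        M = rng.normal(size=(n, n))
        sigma = M @ M.T + 0.1 * np.eye(n)    # random positive-definite covariance
        count += frontier_is_possible(mu, sigma)
    possible_share[n] = count / 200
print(possible_share)
```

With two assets every frontier contains the single-asset portfolios and is always possible; by n = 50, randomly drawn frontiers are essentially never possible, consistent with the geometric rate in the abstract.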
Forman, Sandra M., Andrew W. Lo, Monica Shilling, and Grace K. Sweeney (2015), Funding Translational Medicine via Public Markets: The Business Development Company, Journal of Investment Management 13 (4), 9–32.
A business development company (BDC) is a type of closed-end investment fund with certain relaxed requirements that allow it to raise money in the public equity and debt markets. A BDC can be used to fund multiple early-stage biomedical ventures, using financial diversification to de-risk translational medicine. By electing to be a “Regulated Investment Company” for tax purposes, a BDC can avoid double taxation on income and net capital gains distributed to its shareholders. BDCs are ideally suited for long-term investors in biomedical innovation, including: (i) investors with biomedical expertise who understand the risks of the FDA approval process; (ii) “banking entities,” now prohibited from investing in hedge funds and private equity funds by the Volcker Rule but permitted to invest in BDCs, subject to certain restrictions; and (iii) retail investors, who traditionally have had to invest in large pharmaceutical companies to gain exposure to similar assets. We describe the history of BDCs, summarize the requirements for creating and managing them, and conclude with a discussion of the advantages and disadvantages of the BDC structure for funding biomedical innovation.
Is the FDA Too Conservative or Too Aggressive?: A Bayesian Decision Analysis of Clinical Trial Design
Isakov, Leah, Andrew W. Lo, and Vahid Montazerhodjat (2019), Is the FDA Too Conservative or Too Aggressive?: A Bayesian Decision Analysis of Clinical Trial Design, Journal of Econometrics 211 (1), 117–136.
Implicit in the drug-approval process is a trade-off between Type I and Type II error. We propose using Bayesian decision analysis (BDA) to minimize the expected cost of drug approval, where relative costs are calibrated using U.S. Burden of Disease Study 2010 data. The results for conventional fixed-sample randomized clinical-trial designs suggest that for terminal illnesses with no existing therapies such as pancreatic cancer, the standard threshold of 2.5% is too conservative; the BDA-optimal threshold is 27.9%. However, for relatively less deadly conditions such as prostate cancer, 2.5% may be too risk-tolerant or aggressive; the BDA-optimal threshold is 1.2%. We compute BDA-optimal sizes for 25 of the most lethal diseases and show how a BDA-informed approval process can incorporate all stakeholders’ views in a systematic, transparent, internally consistent, and repeatable manner.
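The core trade-off can be reproduced in miniature: choose the significance threshold that minimizes the prior-weighted expected cost of Type I versus Type II errors under a normal test statistic. The costs, prior, and standardized effect size below are illustrative stand-ins, not the paper's Burden of Disease calibrations.

```python
from statistics import NormalDist

N = NormalDist()

def expected_cost(alpha, effect, cost_fp, cost_fn, p_effective=0.5):
    """Expected cost of a one-sided fixed-sample trial at level alpha:
    a false approval (Type I) costs cost_fp, a missed effective drug
    (Type II) costs cost_fn, weighted by the prior that the drug works.
    All parameter values are illustrative."""
    z_crit = N.inv_cdf(1 - alpha)
    power = N.cdf(effect - z_crit)  # P(approve | drug truly works)
    return ((1 - p_effective) * cost_fp * alpha
            + p_effective * cost_fn * (1 - power))

def optimal_alpha(effect, cost_fp, cost_fn):
    grid = [a / 1000 for a in range(1, 500)]  # alpha in (0.001, 0.499)
    return min(grid, key=lambda a: expected_cost(a, effect, cost_fp, cost_fn))

# Deadly disease with no therapy: a missed effective drug is far costlier
# than a false approval, so the optimal threshold rises well above 2.5%.
severe = optimal_alpha(effect=2.0, cost_fp=1.0, cost_fn=4.0)
# Less deadly condition: the cost asymmetry reverses and the threshold tightens.
mild = optimal_alpha(effect=2.0, cost_fp=4.0, cost_fn=1.0)
print(severe, mild)
```

The qualitative pattern matches the abstract: the BDA-optimal threshold is permissive when the burden of a missed therapy dominates, and stringent when the burden of a false approval dominates.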
Getmansky, Mila, Peter A. Lee, and Andrew W. Lo (2015), Hedge Funds: A Dynamic Industry in Transition, Annual Review of Financial Economics 7 (1), 483–577.
The hedge-fund industry has grown rapidly over the past two decades, offering investors unique investment opportunities that often reflect more complex risk exposures than those of traditional investments. In this article, we present a selective review of the recent academic literature on hedge funds as well as updated empirical results for this industry. Our review is written from several distinct perspectives: the investor’s, the portfolio manager’s, the regulator’s, and the academic’s. Each of these perspectives offers a different set of insights into the financial system, and the combination provides surprisingly rich implications for the Efficient Markets Hypothesis, investment management, systemic risk, financial regulation, and other aspects of financial theory and practice.
Return Smoothing, Liquidity Costs, and Investor Flows: Evidence from a Separate Account Platform
Cao, Charles, Grant Farnsworth, Bing Liang, and Andrew W. Lo (2017), Return Smoothing, Liquidity Costs, and Investor Flows: Evidence from a Separate Account Platform, Management Science 63 (7), 2233–2250.
We use a new dataset of hedge fund returns from a separate account platform to examine (1) how much of hedge fund return smoothing is due to main-fund-specific factors, such as managerial reporting discretion, and (2) the costs of removing hedge fund share restrictions. These accounts trade pari passu with matching hedge funds but feature third-party reporting and permissive share restrictions. We use these properties to estimate that 33% of reported smoothing is due to managerial reporting methods. The platform’s fund-level liquidity is associated with costs of 1.7% annually. Investor flows chase monthly past performance on the platform but not in the associated funds.
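Return smoothing of this kind is commonly modeled as reported returns being a weighted average of current and lagged true returns, as in the Getmansky, Lo, and Makarov smoothing model. The sketch below, with illustrative weights and simulated data, shows the two telltale signatures: understated volatility and induced serial correlation.

```python
import random

def smooth(returns, weights=(0.6, 0.25, 0.15)):
    """Reported return as a weighted average of current and lagged true
    returns (weights sum to one); these weights are illustrative."""
    assert abs(sum(weights) - 1.0) < 1e-12
    out = []
    for t in range(len(returns)):
        lags = [returns[t - k] if t - k >= 0 else 0.0
                for k in range(len(weights))]
        out.append(sum(w * r for w, r in zip(weights, lags)))
    return out

def stdev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

def autocorr1(xs):
    """First-order sample autocorrelation."""
    m = sum(xs) / len(xs)
    num = sum((xs[t] - m) * (xs[t - 1] - m) for t in range(1, len(xs)))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

random.seed(1)
true = [random.gauss(0.01, 0.04) for _ in range(5000)]  # simulated true returns
reported = smooth(true)
print(stdev(true), stdev(reported))        # smoothing understates volatility
print(autocorr1(true), autocorr1(reported))  # and induces serial correlation
```

Comparing third-party-reported separate accounts to the matching funds, as the paper does, isolates how much of this smoothing comes from managerial reporting rather than the underlying assets.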
Bisias, Dimitrios, Mark Flood, Andrew W. Lo, and Stavros Valavanis (2012), A Survey of Systemic Risk Analytics, Annual Review of Financial Economics 4 (1), 255–296.
We provide a survey of 31 quantitative measures of systemic risk in the economics and finance literature, chosen to span key themes and issues in systemic risk measurement and management. We motivate these measures from the supervisory, research, and data perspectives in the main text, and present concise definitions of each risk measure—including required inputs, expected outputs, and data requirements—in an extensive appendix. To encourage experimentation and innovation among as broad an audience as possible, we have developed open-source MATLAB code for most of the analytics surveyed, available for download.
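To give a flavor of the surveyed analytics, one of the simpler measures is the absorption ratio: the fraction of total return variance explained by the top eigenvectors of the asset covariance matrix, which rises when risk becomes concentrated in a few common factors. The companion code is in MATLAB; the Python sketch below, on simulated data, is only an approximation of the idea.

```python
import numpy as np

def absorption_ratio(returns, n_components=2):
    """Fraction of total variance 'absorbed' by the top eigenvectors of
    the sample covariance matrix; returns has shape (T, N_assets)."""
    cov = np.cov(returns, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending
    return eigvals[:n_components].sum() / eigvals.sum()

rng = np.random.default_rng(0)
common = rng.normal(size=(500, 1))
# Ten assets driven mostly by one common factor: risk is highly unified.
coupled = 0.9 * common + 0.1 * rng.normal(size=(500, 10))
# Ten independent assets: risk is dispersed across many sources.
dispersed = rng.normal(size=(500, 10))
print(absorption_ratio(coupled), absorption_ratio(dispersed))
```

A high absorption ratio signals tightly coupled markets in which shocks propagate broadly, which is the systemic-risk interpretation behind the measure.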
Hasanhodzic, Jasmina, Andrew W. Lo, and Emanuele Viola (2011), A Computational View of Market Efficiency, Quantitative Finance 11 (7), 1043–1050.
We propose to study market efficiency from a computational viewpoint. Borrowing from theoretical computer science, we define a market to be efficient with respect to resources S (e.g., time, memory) if no strategy using resources S can make a profit. As a first step, we consider memory-m strategies whose action at time t depends only on the m previous observations at times t - m,...,t - 1. We introduce and study a simple model of market evolution, where strategies impact the market by their decision to buy or sell. We show that the effect of optimal strategies using memory m can lead to "market conditions" that were not present initially, such as (1) market bubbles and (2) the possibility for a strategy using memory m' > m to make a bigger profit than was initially possible. We suggest ours as a framework to rationalize the technological arms race of quantitative trading firms.
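A memory-m strategy can be represented as a lookup table from the last m moves to an action. The toy sketch below is hypothetical rather than the paper's model: it fits such tables to an up/down sequence whose next move is the XOR of the previous two, so memory 2 captures the pattern while memory 1 cannot.

```python
from collections import Counter, defaultdict

def fit_memory_strategy(seq, m):
    """Learn a lookup table: for each length-m history, predict the
    majority next move observed in the training sequence."""
    table = defaultdict(Counter)
    for t in range(m, len(seq)):
        table[tuple(seq[t - m:t])][seq[t]] += 1
    return {h: c.most_common(1)[0][0] for h, c in table.items()}

def accuracy(strategy, seq, m):
    """Out-of-sample hit rate of the lookup table on a test sequence."""
    hits = total = 0
    for t in range(m, len(seq)):
        h = tuple(seq[t - m:t])
        if h in strategy:
            hits += strategy[h] == seq[t]
            total += 1
    return hits / total

# Moves where each step is the XOR of the previous two: fully
# predictable with memory 2, only partially with memory 1.
moves = [1, 0]
for _ in range(300):
    moves.append(moves[-1] ^ moves[-2])
train, test = moves[:150], moves[150:]
m1 = accuracy(fit_memory_strategy(train, 1), test, 1)
m2 = accuracy(fit_memory_strategy(train, 2), test, 2)
print(m1, m2)
```

The strict improvement from m = 1 to m = 2 is the simplest version of the arms-race logic: whoever can afford one more unit of memory (or computation) can extract profit invisible to shorter-memory competitors.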
Lo, Andrew W. (2016), The Gordon Gekko Effect: The Role of Culture in the Financial Industry, FRBNY Economic Policy Review 22 (1), 17–42.
Culture is a potent force in shaping individual and group behavior, yet it has received scant attention in the context of financial risk management and the recent financial crisis. I present a brief overview of the role of culture according to psychologists, sociologists, and economists, and then present a specific framework for analyzing culture in the context of financial practices and institutions in which three questions are answered: (1) What is culture?; (2) Does it matter?; and (3) Can it be changed? I illustrate the utility of this framework by applying it to five concrete situations—Long-Term Capital Management; AIG Financial Products; Lehman Brothers and Repo 105; Société Générale’s rogue trader; and the SEC and the Madoff Ponzi scheme—and conclude with a proposal to change culture via “behavioral risk management.”