In 1985, Philip Tetlock, now a professor at the Wharton School of the University of Pennsylvania, set out to ascertain how accurate expert forecasters are in their predictions of future events.
Tetlock’s study covered many areas, including economics, politics, financial markets, climate, and military strategy. His analysis spanned nearly 20 years, during which he interviewed 284 experts and collected roughly 82,000 forecasts. This expansive research led him to conclude that:
1. Expert forecasts were less accurate than random guesses.
2. Aggregate (average) forecasts were superior to individual forecasts but were still inferior to random guesses.
3. Experts who made the most media appearances were the least accurate.
In short, you would have been better off throwing darts while blindfolded than following the advice of experts.
But Investment Professionals Are the Exception – NOT!
Successful wealth management is predicated on both security selection and asset allocation. In order to achieve long-term results that are better than those which could be obtained by flipping a coin, investors must be able to:
1. Select outperforming stocks/sectors, and/or
2. Identify outperforming asset classes (whether stocks will outperform bonds, which countries will outperform, etc.).
Unfortunately, ample evidence demonstrates that these two skills are in extremely short supply, leaving most clients holding the proverbial bag of underperformance.
Stock Picking: A Zero-Sum Game
Nobel Prize-winning economist Eugene Fama is widely heralded as “the father of modern finance”. According to Fama:
“Active management in aggregate is a zero-sum game. Good active managers can win only at the expense of bad active managers. Any time an active manager makes money by overweighting a stock, he can win only because other active managers are underweighting that stock. The two sides always net out – before the costs of active management. After costs, active management is a negative-sum game by the amount of costs borne by investors.
After costs, only the top 3% of managers produce a return that indicates they have sufficient skill to just cover their costs, which means that going forward, even the top performers are expected to be only as good as a low-cost passive index fund. The other 97% can be expected to do worse. It is a matter of arithmetic that investors who go with active management must on average lose by the amount of fees and expenses incurred.”
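To make Fama’s arithmetic concrete, here is a minimal sketch in Python using purely illustrative numbers (the market return and fee levels are assumptions, not data from any study cited here). It shows why the average actively managed dollar must trail a low-cost index fund by roughly the fees it pays:

```python
# A sketch of the zero-sum arithmetic, using made-up numbers.
market_return = 0.08   # assumed total market return for the year
passive_fee = 0.0005   # assumed index-fund cost: 5 basis points
active_fee = 0.01      # assumed active fees and trading costs: 100 basis points

# Before costs, every overweight position is someone else's underweight,
# so the average actively managed dollar earns exactly the market return.
avg_active_gross = market_return

# After costs, the averages diverge by the fees each side pays.
avg_active_net = avg_active_gross - active_fee
avg_passive_net = market_return - passive_fee

print(f"Average active return, net of costs:  {avg_active_net:.2%}")
print(f"Average passive return, net of costs: {avg_passive_net:.2%}")
print(f"Shortfall of the average active dollar: {avg_passive_net - avg_active_net:.2%}")
```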
Time: Active Management’s Nemesis
Both Standard & Poor’s Indices Versus Active (SPIVA) scorecards and Vanguard’s Case for Indexing reports have repeatedly demonstrated that while some managers do outperform, they typically do not do so by much or for long. The following table strongly suggests that the longer a portfolio of actively managed funds is held, the greater its chance of underperforming one comprised of index funds.
What About Risk?
A common defense offered by active managers is that their superior risk management/lower volatility more than makes up for their inability to outperform their benchmarks. However, as the table below shows, this claim is not supported by the evidence. In comparison with their index fund counterparts, actively managed portfolios have on average performed just as poorly on a risk-adjusted basis as they have on a raw return basis.
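For readers unfamiliar with the term, “risk-adjusted” comparisons are commonly made with a measure such as the Sharpe ratio, which scales a fund’s excess return by its volatility. The sketch below uses made-up return, volatility, and risk-free figures (not the data behind the table referenced above) to show how a lower-volatility active fund can still look worse once its return shortfall is factored in:

```python
# Illustrative Sharpe-ratio comparison with assumed figures.

def sharpe_ratio(annual_return, annual_volatility, risk_free_rate=0.02):
    """Excess return earned per unit of volatility."""
    return (annual_return - risk_free_rate) / annual_volatility

index_sharpe = sharpe_ratio(annual_return=0.08, annual_volatility=0.15)
active_sharpe = sharpe_ratio(annual_return=0.06, annual_volatility=0.13)

print(f"Index fund Sharpe ratio:  {index_sharpe:.2f}")   # 0.40
print(f"Active fund Sharpe ratio: {active_sharpe:.2f}")  # 0.31
```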
To be clear, we are not claiming that active funds can never beat their benchmarks; the evidence shows that some can. There have been, and always will be, managers who add value. However, even the most astute investors cannot predict which funds will outperform or over what period, making the selection of winning managers something of a fool’s errand. In other words, it is possible to outperform by selecting winning managers; it’s just not probable.
What About All Those Smart People? Benjamin Graham Spawned A Tribe of Cannibals
Benjamin Graham is the architect of modern security analysis, which uses fundamental data (income statement and balance sheet items) to estimate the fair value of a stock. Graham’s approach was way ahead of its time and remained largely undiscovered for several decades. This enabled him to achieve an annualized return of roughly 20% from 1936 to 1956, as compared to 12.2% for the overall market. Warren Buffett describes Graham’s The Intelligent Investor (1949) as “the best book about investing ever written.”
Today, there is no shortage of adherents to Graham’s approach. The current active management universe is both inundated with and driven by Graham-style security analysis. Stocks are constantly and continuously scrutinized by armies of security analysts, armed with reams of widely available fundamental data.
It’s hard enough to outperform in an area heavily populated by “smart money”. It’s even harder to outperform when you are trying to make money the same way as all those smart people! Graham’s approach has become a victim of its own success, with its followers eating each other’s lunch to the point where most of them fail to add value. All the smart people are doing the same smart things, which causes their results to look not so smart. In the end, Adam Smith hath vanquished Benjamin Graham!
No Reprieve from Asset Allocation
What about asset allocation? Maybe investment professionals who cannot select outperforming stocks can add value by identifying when stocks will out/underperform bonds, which countries/regions will outperform, etc. As is the case with stock-picking, investment professionals have struggled with asset allocation. To be clear, nobody can predict the future with certainty, and expecting anyone to do so would be unrealistic. That being said, strategists’ track record of forecasting market returns has been notably poor.
And the Winner Is…The Blind Forecaster
There are 22 “chief market strategists” at Wall Street’s biggest banks and investment firms, storied names such as Goldman Sachs and Morgan Stanley. They have access to the best information, the smartest economists, and teams of analysts. One of their most important jobs is forecasting how the stock market will perform over the next year, which they do every January by predicting where the S&P 500 Index will close at the end of the following December.
In the 15 years from 2000 to 2014, their predictions were off by an average of 14.7% per year. To put this in perspective, imagine a guy called the Blind Forecaster. He’s an average Joe who assumes each year that the market will rise by 9%, which is roughly its long-term average. As it turns out, the Blind Forecaster doesn’t seem like such a fool after all (at least not compared to most Wall Street strategists). Following his simple, uninformed rule, he would have missed by an average of 14.1%, a smaller miss than the Wall Street brainiacs managed.
Interestingly, the Blind Forecaster’s superior accuracy is not attributable to the global financial crisis of 2008, which you could write off as an unforeseeable “black swan” event. Excluding 2008, the strategists’ average error was 12%, as compared to 11.6% for the Blind Forecaster. The simpleton still wins.
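For the curious, the “average miss” in this comparison can be scored as a simple mean absolute error between forecast and realized returns. The sketch below uses hypothetical forecasts and returns (placeholders, not the actual 2000–2014 figures) purely to illustrate the calculation:

```python
# Scoring forecasts by mean absolute error. The yearly returns and
# strategist forecasts below are hypothetical placeholders.

actual_returns = [0.21, -0.09, 0.13, 0.30, 0.01]       # hypothetical S&P 500 returns
strategist_forecasts = [0.08, 0.10, 0.07, 0.09, 0.10]  # hypothetical consensus calls
blind_forecasts = [0.09] * len(actual_returns)         # "the market rises 9% every year"

def average_miss(forecasts, actuals):
    """Mean absolute difference between forecast and realized returns."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

print(f"Strategists' average miss:       {average_miss(strategist_forecasts, actual_returns):.1%}")
print(f"Blind Forecaster's average miss: {average_miss(blind_forecasts, actual_returns):.1%}")
```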
Maybe There is Something to This Machine Learning Stuff After All
It is disheartening that the very same experts we rely on have no greater forecasting accuracy than that which can be achieved by flipping a coin. However, there is a silver lining.
In addition to evaluating the accuracy of expert forecasts, Philip Tetlock also tested the efficacy of rules-based statistical models. He found that not only did the models outperform the experts, but also that their results were superior to those which could have been achieved by random chance.
There is a Central American saying that goes something like, “If one person tells you you’re drunk, and you feel fine, ignore him. If ten people tell you you’re drunk, lie down.” Both Tetlock’s analysis and scores of other academic studies demonstrate that rules-driven decisions based on data and statistical analysis are superior to those based on the subjective opinions of experts.
We are not “experts” in the traditional sense. However, the Outcome funds do leverage our expertise in big data analysis and machine learning. This algorithms-over-experts approach has enabled our strategies to deliver superior risk-adjusted returns, which we believe will continue.