Why We Seek the Advice of Experts
In many aspects of our lives, we defer to the judgement of experts, whether they be doctors, lawyers, or investment managers. We rely on experts’ training, experience, and intuition, which should enable them to make better decisions than lay people in their respective fields. Unless you graduated from medical school, it would be inadvisable to self-diagnose. Similarly, one would think that professional investors possess the knowledge and expertise to achieve better results than their clients could achieve on their own.
Out of the Fire and Into the Frying Pan
We readily concede that in most cases, experts are capable of making better decisions than their non-professional peers. However, the simple fact is that experts are not as logical and unbiased as you may believe. Specifically, a robust body of academic literature spanning more than 60 years clearly illustrates that experts produce worse results than data-driven, rules-based models.
In 1982, James Simons, an award-winning mathematician, founded investment company Renaissance Technologies (RenTec). The firm strictly adheres to mathematical and statistical methods and is regarded as the most successful hedge fund in the world. Its signature Medallion fund is famed for having the best investing record in history, returning more than 66% annualized before fees and 39% after fees over a 30-year span from 1988 to 2018. Simons stated:
“If you do fundamental trading, one morning you feel like a genius, the next day you feel like an idiot…by 1998 I decided we would go 100% models…we slavishly follow the model. You do whatever it [the model] says no matter how dumb or smart you think it is. And that turned out to be a wonderful business.”
Simons’ sentiments are echoed by legendary investor Ray Dalio. Dalio is the founder of Bridgewater Associates, the world’s largest hedge fund, and is regarded as one of the greatest innovators in the finance world. Over the last 20 years, Bridgewater’s Pure Alpha Fund has delivered a near 20% annual compound return before fees. Dalio believes that everything can be analyzed and quantified. He stated that 99% of the time he agrees with the output of Bridgewater’s quantitative investment models. He also confessed that on the rare occasions when he has disagreed with the machine, it was right 66% of the time.
Experts? We Don’t Need No Stinkin’ Experts!
It seems logical that someone with an MBA from a top business school and decades of experience can beat a rules-based model. The expert’s hypothesis, which asserts that experts outperform models, is predicated on the following statements:
1. Experts have access to qualitative information.
2. Experts have more data.
3. Experts possess intuition and experience.
In theory, these attributes should result in superior decisions and results. However, the evidence demonstrates that these alleged advantages are anything but advantageous. An abundance of anecdotal and empirical evidence suggests that the three pillars underlying the expert’s hypothesis not only fail to translate into superior decisions, but generally lead to inferior results.
More Is Less, Both in Football and in Markets
Intuitively, having access to more information should lead to better decisions. However, studies have shown that the opposite is likely to be the case.
Meredith Whitney became famous for predicting the banking crisis of 2008. In a December 2010 segment of 60 Minutes, she outlined her gloomy forecast for the municipal bond market, stating that there would be 50 to 100 sizeable defaults.
Whitney had the best qualitative and quantitative data available. She had access to important people in local and state governments who provided her with privileged “soft” information and had analyzed thousands of pages of municipal bond term-sheets and macroeconomic research reports. Whitney’s access, coupled with her previous experience and success, made her prediction about the municipal bond market compelling to both the media and other investment professionals.
Nearly two years later, the Wall Street Journal published a stinging article entitled “Meredith Whitney Blew a Call – And Then Some”, which pointed out that there had been just five defaults in the muni market. Ms. Whitney was off by a factor of 10.
Professor Claire Tsai, in collaboration with colleagues at the University of Chicago, tested the relationship between information and forecast accuracy. In their study, they selected participants who self-identified as being knowledgeable about college football and presented them with varying amounts of data points. The researchers then asked the subjects to forecast the outcomes of football games and to rate their confidence in their predictions. Although the participants who received more information had greater confidence in their predictions than their less informed counterparts, their forecasts were no more accurate. The experts incorrectly interpreted more information as better information.
In a similarly purposed study, a team of theoretical physicists and social scientists created an artificial trading environment in which participants were randomly assigned one of nine information levels, ranging from next to no information to perfect insider information. Predictably, the traders with inside information managed to beat the market. Surprisingly, participants with mid-level information underperformed both the market and their completely uninformed peers, who achieved market returns.
The partially informed traders should have outperformed their uninformed peers, given their access to superior information. However, they failed to do so because overconfidence caused them to overvalue their information, which in turn prevented them from using it effectively.
In the immortal words of Mark Twain, “It ain’t what you know that gets you into trouble. It’s what you know for sure that just ain’t so.”
Brain Impairment and Investing: When 1 + 1 < 2
A 1984 study entitled “Clinical Detection of Intellectual Deterioration Associated with Brain Damage” pitted experienced psychologists against a simple prediction algorithm to see which could more accurately classify brain impairment in patients. The psychologists were equipped with their vast experience and intuition; the algorithm, on the other hand, relied purely on a statistical model of prior data. The algorithm crushed its opponents, achieving a classification accuracy of 83.3% vs. 58.3% for the experienced clinicians.
The researchers then provided the psychologists with the model to determine if, armed with its output, they could improve on its stand-alone accuracy. Prior to the experiment, the experts were informed that the model had “previously demonstrated high predictive validity in identifying the presence or absence of intellectual deterioration associated with brain damage.” With the model in hand, the experienced clinicians improved their accuracy from 58.3% to 75%, still falling well short of the stand-alone model’s accuracy rate of 83.3%. Clearly, the experts couldn’t resist overriding the model with their own judgement, which served to hinder rather than improve performance.
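The arithmetic behind this result is intuitive: a clinician who follows the model on most cases but overrides it on some blends the model’s high accuracy with their own lower accuracy. As a purely illustrative sketch (the 70/30 follow/override split below is our assumption, not a figure from the study, and it treats override accuracy as equal to the clinicians’ unaided rate):

```python
# Illustrative arithmetic only: the follow/override split is assumed.
model_accuracy = 0.833    # stand-alone model accuracy (from the study)
expert_accuracy = 0.583   # unaided clinician accuracy (from the study)
follow_rate = 0.70        # hypothetical share of cases where the
                          # clinician accepts the model's output

# Blended accuracy: model accuracy on followed cases, expert accuracy
# on overridden cases.
blended = follow_rate * model_accuracy + (1 - follow_rate) * expert_accuracy
print(round(blended, 3))  # ~0.758, close to the observed 75%
```

Under these assumed numbers, any override rate above zero drags the blended accuracy below the model’s 83.3%.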
Studies have shown similar results in the world of investing. Joel Greenblatt is the well-known author of the bestselling book The Little Book That Beats the Market. His firm, Formula Investing, uses a simple rules-based algorithm that buys companies that rank favorably on a purely quantitative combination of valuation and quality metrics. The firm offers its clients the following two types of accounts:
1. Professionally managed accounts, where stocks are chosen exclusively by Greenblatt’s model.
2. Self-managed accounts, where clients receive the model’s recommendations and are free to alter the portfolio based on their discretionary judgement and intuition.
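The kind of screen Greenblatt describes ranks companies jointly on a valuation metric (such as earnings yield) and a quality metric (such as return on capital), then buys the best combined ranks. A minimal sketch of such a rules-based screen, using hypothetical tickers and fundamentals rather than real data, might look like:

```python
# Sketch of a Greenblatt-style ranking screen. Tickers and numbers
# below are hypothetical illustrations, not real market data.

stocks = {
    # ticker: (earnings_yield, return_on_capital)
    "AAA": (0.12, 0.30),
    "BBB": (0.08, 0.45),
    "CCC": (0.15, 0.10),
    "DDD": (0.05, 0.50),
    "EEE": (0.10, 0.25),
}

def rank(metric_values):
    """Map each ticker to its rank (1 = highest metric value)."""
    ordered = sorted(metric_values, key=metric_values.get, reverse=True)
    return {ticker: i + 1 for i, ticker in enumerate(ordered)}

yield_rank = rank({t: v[0] for t, v in stocks.items()})
quality_rank = rank({t: v[1] for t, v in stocks.items()})

# Combined score is the sum of the two ranks; lower is better.
combined = {t: yield_rank[t] + quality_rank[t] for t in stocks}
portfolio = sorted(combined, key=combined.get)[:3]  # buy the top 3
print(portfolio)
```

The point of such a screen is precisely that it leaves no room for discretion: the model emits a list, and the professionally managed accounts buy the list.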
As was the case with classifying brain impairments, adding human intuition and discretion to the model’s output served to detract from performance. Over a two-year period from May 2009 to April 2011, the professionally managed accounts achieved an average return of 84.1%, outperforming the S&P 500 by 21.4%. On the other hand, the self-managed accounts returned 59.4%, underperforming the S&P 500 by 3.3% and underperforming the purely model-driven accounts by 24.7%.
Romance is for the Bedroom, NOT the Boardroom
The assumptions underlying the expert’s hypothesis are empirically invalid for the simple reason that neither qualitative information, additional data, nor experience increases forecasting accuracy.
The notion that intuition possesses transformative power is inherently romantic. It elevates decision making above the drab world of statistics and spreadsheets and turns it into an art form. The office becomes a place of inspiration and vision rather than one of banal number-crunching. But the simple fact is that for decision making to be effective, it must be systematic. People are better off following data-driven, rules-based processes grounded in empirical evidence. This conclusion is well-summarized by renowned academic Paul E. Meehl, former President of the American Psychological Association, who stated:
“There is no controversy in social science that shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction as this one [models outperform experts].”
At Outcome, our approach to markets is in no way predicated on our expertise in making investment decisions. Rather, it is based on our collective ability to analyze data and construct empirically driven, rules-based models. While there never has been, nor will there ever be, any certainty in markets, we believe that our models will produce better risk-adjusted returns than their expert competitors over the long term.