Why Investors Won’t Embrace AI

The biggest challenge quantitative managers face may not be developing powerful, predictive AI-based investment models, but convincing investors not to trust their own judgment, writes columnist Angelo Calvello.

I’ve staked my career on the view that advanced artificial intelligence methods will radically transform both our investment-decision-making processes and our business models.

With few exceptions, asset owners, asset managers, and consultants do not agree with my outlook on the future.

Some, like Jeff Gundlach, CIO of DoubleLine Capital, take a more dogmatic view: “I don’t believe in machines taking over finance at all.”

Others are more circumspect. They might claim to share my view — but a deeper investigation reveals they see AI as a fad or a hedge rather than as a catalyst for an inevitably different future.

It’s disconcerting that these usually critical thinkers fail to see that AI’s transformative power lies in making better predictions than traditional approaches can. The disconnect stems not from a disparity between our worldviews, but from the internal inconsistency of their own views.

These same individuals accept that in the past few years, advanced AI technologies have started to solve complex problems better than humans, without explicit human programming (e.g., DeepMind’s AlphaGo and Mount Sinai’s Deep Patient).

They fully understand the commercial benefits of AI and can often explain how traditional companies like Walmart, UPS, and Toyota have realized those benefits by incorporating AI into their core businesses. More radically, they see how an old-world company like GE has used AI and big data to transform itself from a heavy-industry and consumer-products company into the world’s largest digital industrial company, with the explicit objective of “using AI to solve the world’s most pressing problems across all our industrial business lines.”

The internal inconsistency of those who disagree with my AI worldview does not reflect a deficiency of cognitive powers. Rather, it’s the result of a specific behavioral bias: algorithm aversion.

Yes, “algorithm aversion” is a real phenomenon supported by a substantial body of academic research. It is best summarized by academics Berkeley Dietvorst, Joseph Simmons, and Cade Massey, who wrote in a 2014 paper, “Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster.” In fact, “people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster.”

As Man AHL chief investment officer Nick Granger explained it in a Bloomberg article: “[Algorithm aversion] shows people trust humans to do jobs even when, according to the evidence, computers are more effective.”

This behavioral bias explains how investors can hold the contradictory position that AI can help other businesses make better predictions and decisions, yet is of less value in investing.

This bias also explains why, in spite of algorithms’ demonstrated forecasting power, we cling to the belief that human input can improve their output.

For example, hedge fund CQS’s Michael Hintze claims that “models are a great place to begin, but not necessarily a good place to finish. It is a team effort, and you need the analysts, traders, and portfolio managers with the skills, experience, and judgment to use and understand sophisticated financial models.”

Interestingly, Hintze’s comment reveals not only the problem of, but also a potential cure for, algo aversion: Give individuals a sense of control over the algos, and they are more likely to accept their forecasts.

Academic literature supports this compromise solution, but it seems more appropriate to cite an industry practitioner, Mark Rzepczynski, for an explanation of this point:

“If some modest amount of control is given to the decision maker, he will choose the algo over his own decision making. Allow the individual to adjust the model output by 10 percent, and [he is] happy. Allow the individual to reject 10 percent of the output from the model, and [he is] happy. Aversion falls when the human has some control over the model. You could say that the way to combat one behavior bias — algorithm aversion — can be through another bias, the illusion of control.”
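To make the compromise concrete, here is a minimal Python sketch of the two controls Rzepczynski describes: capping a human adjustment at 10 percent of the model’s output, and honoring human vetoes on only 10 percent of the model’s signals. Every function name and parameter here is hypothetical, invented for illustration; no firm cited in this column publishes such code.

```python
import numpy as np

def constrained_adjust(model_forecast: float,
                       human_forecast: float,
                       max_adjust: float = 0.10) -> float:
    """Honor the human's number only within +/- max_adjust (10%)
    of the model's own forecast; clamp it to the band otherwise."""
    band = max_adjust * abs(model_forecast)
    return float(np.clip(human_forecast,
                         model_forecast - band,
                         model_forecast + band))

def budgeted_vetoes(model_signals: list[float],
                    veto_flags: list[bool],
                    max_veto_frac: float = 0.10) -> list[float]:
    """Let the human discard model signals, but only up to
    max_veto_frac (10%) of them; further vetoes are ignored."""
    budget = int(len(model_signals) * max_veto_frac)
    kept, used = [], 0
    for signal, veto in zip(model_signals, veto_flags):
        if veto and used < budget:
            used += 1
            continue  # veto accepted: drop this signal
        kept.append(signal)  # no veto, or veto budget exhausted
    return kept

# The model forecasts a 5.0% return; the human insists on 8.0%.
# The accepted forecast is clamped to 5.5% (a 10% adjustment).
print(constrained_adjust(5.0, 8.0))  # -> 5.5

# Three vetoes requested on twenty signals; the 10% budget
# honors only the first two, so 18 signals survive.
signals = [0.1] * 20
vetoes = [True, True, True] + [False] * 17
print(len(budgeted_vetoes(signals, vetoes)))  # -> 18
```

The design point is behavioral rather than statistical: the caps exist to make the human feel in control, not to improve the forecast.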

The weakness of this accommodation is that though it might lead some investors to adopt AI, it dilutes the predictive power of the model, making it likely that those investors will be disappointed with the blended human/AI results.
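A toy Monte Carlo sketch can show the arithmetic of that dilution. The assumptions are invented for illustration: the model’s forecast carries real signal, while the human’s adjustment is the model’s own number plus pure noise, uncorrelated with the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
truth = rng.normal(0.0, 1.0, n)          # what actually happens

# Invented assumptions: the model tracks the truth with modest
# error; the human's "gut feel" tweak adds noise, not information.
model = truth + rng.normal(0.0, 0.5, n)
human = model + rng.normal(0.0, 1.0, n)

def rmse(forecast: np.ndarray) -> float:
    return float(np.sqrt(np.mean((forecast - truth) ** 2)))

# The 10% human override from the compromise described above.
blended = 0.9 * model + 0.1 * human

print(f"model alone: {rmse(model):.3f}")    # ~0.500
print(f"90/10 blend: {rmse(blended):.3f}")  # ~0.510, strictly worse
print(f"human alone: {rmse(human):.3f}")    # ~1.118
```

If the human’s adjustment carried independent information rather than noise, a small blend could in principle help; the point of the research cited above is that, in forecasting, it usually does not.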

I remain convinced that AI will transform asset management. However, I’ve come to see that the biggest challenge we face may not be developing powerful predictive AI-based investment models, but simply convincing investors not to trust their own judgment. More broadly, the winners and losers will be decided not by the current market position of a firm or even the size of its checkbook, but by its ability to overcome its anthropocentric prejudice and trust AI like it would trust a human being.
