When I asked Tim McCusker, CIO at consulting firm NEPC,
about active managers' increasing use of artificial
intelligence, he paraphrased the Roman historian Suetonius:
AI investing is not going away.
The evidence is on McCusker's side. According to Business Insider, at a recent J.P. Morgan
conference, the bank asked 237 investors about big data and
machine learning and found that 70% thought that the
importance of these tools will gradually grow for all
investors. A further 23% said they expected a revolution, with
rapid changes to the investment landscape.
Such investor interest signals both frustration with
current active managers, quant managers in particular,
and the nascent promise shown by AI hedge funds.
Whatever the cause, AI investing presents consultants and
asset owners with a serious challenge.
Our industry takes as a universal truth that investment
processes should be understandable. As part of the kabuki
theater we call investment due diligence, asset owners and
consultants require that a manager be able to explain its
strategy and model.
A manager reveals just enough of its investment process to
provide allocators with a cairn from which they can then orient
themselves and continue their assessment.
We must recognize that a traditional manager's degree
of disclosure reflects a willful act: The manager could reveal
more but chooses not to because, it claims, doing so would put
its process at risk. It's probably closer to the truth that
sharing too much would reveal the paucity of the process.
However, though an AI manager can provide a general overview
of its approach ("We use a recursive neural
network"), it can provide no authentic narrative of its
investment process, not because of willful deflection but
because, unlike a traditional manager, it has not hand-built
its investment model. The model built itself, and the manager
cannot fully explain that model's investment decisions.
Think of a traditional manager as Deep Blue, a
human-designed program that used such preselected techniques as
decision trees and if/then statements to defeat chess
grandmaster Garry Kasparov in 1997. Think of an AI manager as
DeepMind's AlphaGo, which used deep learning to beat some
of the world's best Go players. (Go is an ancient Chinese
board game that is much more complex than chess and has more
possible moves than the total number of atoms in the visible universe.) Without explicit human
programming, AlphaGo created its own model that allows it to
make decisions better than its human opponents.
With enough time and training, we can explain why Deep Blue
made a certain chess move at a certain time. Although we can
observe how AlphaGo plays Go, we cannot explain why it made a
specific move at a specific point in time. As Yoshua Bengio, a pioneer of deep-learning
research, describes it: "As soon as you have a complicated
enough machine, it becomes almost impossible to completely
explain what it does."
This is why an AI manager cannot explain its investment
process. The requirement of interpretability brings the
assessment of, and by extension the investment in,
AI strategies to a screeching halt.
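The Deep Blue versus AlphaGo contrast can be sketched in a few lines of code. The example below is purely illustrative, not any real manager's model: the rule-based signal stands in for a hand-built, Deep Blue-style process in which every branch was written, and can be explained, by a human; the tiny neural network stands in for a learned, AlphaGo-style model whose stand-in weights represent parameters fit by training, where no single number maps to a rule anyone can narrate.

```python
import math

# Hand-built, Deep Blue-style rule: fully interpretable.
# (Illustrative thresholds only, not a real trading rule.)
def rule_based_signal(momentum, valuation):
    # Every branch can be read, and its rationale explained, by its author.
    if momentum > 0 and valuation < 1.0:
        return "buy"    # price trending up and the asset looks cheap
    elif momentum < 0:
        return "sell"   # price trending down
    return "hold"

# Learned, AlphaGo-style model: a tiny neural net. The numbers below
# are stand-ins for weights produced by training; no one can point at
# a single weight and say what "rule" it encodes.
W1 = [[0.8, -1.2], [0.3, 0.9]]   # hidden-layer weights (stand-ins)
b1 = [0.1, -0.4]                 # hidden-layer biases (stand-ins)
W2 = [1.1, -0.7]                 # output weights (stand-ins)
b2 = 0.05                        # output bias (stand-in)

def learned_signal(momentum, valuation):
    x = [momentum, valuation]
    # Forward pass: the inputs are mixed and squashed nonlinearly,
    # so the output cannot be traced back to a human-readable rule.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    score = sum(w * h for w, h in zip(W2, hidden)) + b2
    return "buy" if score > 0 else "sell"

# The first answer comes with a narrative; the second is just a
# number crossing zero.
print(rule_based_signal(0.5, 0.8))
print(learned_signal(0.5, 0.8))
```

A real deep-learning model has millions of such weights rather than a handful, which is precisely Bengio's point: the opacity grows with the machine's complexity, and the investment version of this model could produce returns without ever producing an explanation.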
With AI investing, allocators face a new choice. Currently,
in an act of complicity, they choose access over knowledge,
accepting a manager's willfully limited disclosure
of its narrative but naively believing that the narrative does
exist and is known to the manager's illuminati.
The new choice facing all AI consumers is more fundamental.
The choice, according to Aaron M. Bornstein, a researcher at the
Princeton Neuroscience Institute, is, "Would we want to
know what will happen with high accuracy, or why something will
happen, at the expense of accuracy?"
Requiring interpretability of investment strategies is a
vestige of old-world assumptions and is entirely unsatisfactory
for reasons that transcend investing: We either forswear
certain types of knowledge (e.g., deep learning-generated
medical diagnoses) or force such knowledge into conformity,
thereby lessening its discovered truths (do we really want our
smart cars to be less smart or our investment strategies to be
less powerful?). Moreover, this requirement smacks of
hypocrisy: Given what Erik Carleton of Textron calls the
"often flimsy" explanations of traditional active managers,
investors really don't know how their money is invested.
And conversely, who would not suspend this criterion given the
opportunity to invest in the Medallion Fund?
We need to better invest beneficial assets. AI investing can
help, but its adoption compels us to judge AI strategies not by
their degree of interpretability but by their results.
As scientist Selmer Bringsjord puts it, "We are
heading into a black future, full of black boxes." Embrace it.