Why Machine Learning May Disappoint Investors

Artificial intelligence may finally be delivering on some promises, but it is demanding patience.

  • Jeffrey Kutler

If it sounds too good to be true, then it probably is. For more than half a century, that seemed to be the story of artificial intelligence. Even such visible breakthroughs as Amazon.com’s product-purchase suggestions, Apple’s Siri, and IBM Corp.’s Watson have not dispelled doubts about AI’s prospects for stand-alone commercial success.

Now an AI variant, machine learning, is making a run for the money. For industries like finance and medicine, which are awash in data and desperate for tools to help separate signals from noise, there is some hype here to believe in. Business models and strategies are being built around machine learning. But it is, by definition, evolutionary — the science and programming can be painstakingly trial-and-error — and it does not roll off assembly lines.

Consider sentiment analysis. For investors trying to make sense of market movements and reactions to events, sentiment insight — investor or market sentiment gleaned from news announcements and coverage, tweets, price movements, and other data points — is as close as a Bloomberg terminal. After several years of research and engineering, and building on computer-science advances like natural language processing, Bloomberg created, on a highly sophisticated scale, what machine-learning pioneers demonstrated early on with movie reviews. Viewers’ reactions to films — the words, contexts, and punctuation — could be parsed to produce an aggregate sentiment reading.
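The movie-review approach can be sketched as a simple lexicon-based scorer. The word lists and scoring rule below are illustrative inventions, not Bloomberg’s method or any vendor’s actual lexicon:

```python
# Minimal sketch of lexicon-based sentiment scoring, in the spirit of the
# early movie-review experiments. Word lists here are made up for the example.

POSITIVE = {"great", "brilliant", "moving", "superb", "enjoyable"}
NEGATIVE = {"dull", "tedious", "awful", "disappointing", "flat"}

def sentiment_score(text: str) -> float:
    """Return an aggregate score in [-1, 1]: +1 all positive, -1 all negative."""
    # Strip punctuation crudely and lowercase; real systems also parse
    # context ("not good") rather than counting isolated words.
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.0  # no opinion words found
    return (pos - neg) / (pos + neg)

print(sentiment_score("A brilliant, moving film with superb acting!"))  # 1.0
print(sentiment_score("Dull plot, tedious pacing, awful ending."))      # -1.0
```

Production systems replace the hand-built word lists with models trained on labeled examples, but the output is the same kind of aggregate reading.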

Also making headway in the financial world is AlphaSense, a search engine designed to meet the needs of investment and research professionals faster and better than something generic like Google. The San Francisco-based company has more than 500 clients — after six years in business and 10 years since co-founders Jack Kokko, a former Morgan Stanley senior analyst, and Raj Neervannan, who was chief technology officer of the ePolicy division of data-aggregation company ChoicePoint, started talking about their vision of a bespoke, intelligent query-and-search capability.

Why such long lead times?

Speaking in October at “Big Data in Finance,” a conference at the University of Michigan, Harvard University economics professor and behavioral economics scholar Sendhil Mullainathan explained that through the 1990s, research limitations held machine intelligence back. When, say, movie-sentiment results were imperfect, researchers’ response was to “get more words,” and there were diminishing returns on the effort.

More recently, he said, the approach changed to emphasize empirical data and testing its predictive power through feedback loops — a repetitive, iterative process of teaching or training the systems. That is essentially what goes into automated package-sorting at the post office, language translation programs, and self-driving cars, none of which are overnight sensations. Carnegie Mellon University, which claims to be the birthplace of autonomous vehicle technology, has been working on it since 1984.
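The feedback-loop idea can be illustrated with a toy training loop. The data and the single-threshold model below are invented for the sketch (features are scaled to integers to keep the arithmetic exact); the point is the shape of the process — predict, check against outcomes, adjust, repeat:

```python
# Toy illustration of iterative training through a feedback loop.
# Labeled examples: (feature value scaled x10, correct label).
data = [(1, 0), (4, 0), (6, 1), (9, 1)]

threshold = 0  # initial guess for the decision boundary

for epoch in range(100):              # each pass over the data is one feedback cycle
    errors = 0
    for x, label in data:
        pred = 1 if x > threshold else 0
        if pred != label:             # feedback: the prediction was wrong
            # Nudge the boundary toward the mistake and try again next pass.
            threshold += 1 if pred == 1 else -1
            errors += 1
    if errors == 0:                   # predictions now match outcomes; stop
        break

print(threshold)  # 4 — separates the 0-labeled from the 1-labeled examples
```

Real systems adjust millions of parameters rather than one threshold, but the cycle — predict, measure error, correct — is the same, which is why the training phase can take years rather than weeks.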

Over the last six or seven years, the rise of big data paralleled progress in machine learning, supported by the availability of relatively cheap cloud computing services. Hence the emergence of powerful systems that excel at recognizing patterns, extracting signals from noise, and retaining knowledge from the experience. A natural outlet is cybersecurity, where machine intelligence can detect threats lurking within computer networks or gathering outside. Cases in point include BluVector, a “threat hunting” offering of McLean, Virginia-based Acuity Solutions Corp., which came out in 2015 after five years of development; and San Diego’s DB Networks, which took four years to bring out machine-learning database security technology, in 2013, and in March announced a capability to detect credential abuse in real time. The future of AI, says DB chairman and CEO Brett Helm, is to “not only block attacks but also automatically heal the vulnerabilities.”

Other applications are brewing in regulation. The Financial Industry Regulatory Authority is taking what chief information officer Steven Randich calls “a serious look” at machine learning to better automate its surveillance of more than 50 billion orders, quotes, and other market events each day. FINRA intends to prove the concept in the first half of 2017 by focusing on a single task: weeding out instances of an illegal price-manipulation technique known as layering. “We are about six months away from this being the real deal,” Randich says.
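In layering, a trader builds a burst of non-bona-fide orders on one side of the order book to create false depth, trades against the resulting price pressure on the other side, and cancels the burst. A crude screen for that pattern might look like the sketch below; the event format, threshold, and logic are hypothetical illustrations, not FINRA’s actual surveillance:

```python
# Hedged sketch: flag accounts that build a one-sided burst of resting
# orders and then trade the opposite side — the rough shape of layering.
# Real surveillance also weighs prices, timing, venues, and cancel rates.

def flag_layering(events, burst=3):
    """events: (account, side, action) tuples; action is order/cancel/trade."""
    open_orders = {}   # account -> [side of current burst, resting-order count]
    flagged = set()
    for account, side, action in events:
        state = open_orders.setdefault(account, [None, 0])
        if action == "order":
            if state[0] == side:
                state[1] += 1            # burst grows on the same side
            else:
                state[0], state[1] = side, 1
        elif action == "cancel":
            state[1] = max(0, state[1] - 1)
        elif action == "trade":
            if state[0] is not None and side != state[0] and state[1] >= burst:
                flagged.add(account)     # traded opposite its own large burst

    return flagged

demo = [
    ("A", "buy", "order"), ("A", "buy", "order"), ("A", "buy", "order"),
    ("A", "sell", "trade"),   # sells into the depth its own bids created
    ("A", "buy", "cancel"), ("A", "buy", "cancel"), ("A", "buy", "cancel"),
    ("B", "buy", "order"), ("B", "buy", "trade"),  # ordinary activity
]
print(flag_layering(demo))  # {'A'}
```

At 50 billion events a day, the hard part is not writing a rule like this but learning, from labeled past cases, which combinations of signals separate manipulation from legitimate order flow — which is where the machine-learning proof of concept comes in.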

AlphaSense CEO Kokko concurs about staying focused: “We go very deep into the problem we are solving.” Says his partner and chief technology officer Neervannan, “It is a process of constant iteration – what works and what doesn’t – and knowing what to tweak.” •