Damian Handzy earned his doctorate in nuclear physics, but he says financial markets are far more difficult to understand than protons. Handzy, 46, is co-founder, chair and CEO of Investor Analytics, a New York firm established in 1999 to provide risk management software and services to investment firms. Since then Handzy has seen a great deal of evolution in the field of risk management, particularly after the financial crisis of 2008 and its attendant liquidity shocks caught many asset managers flat-footed.
In the wake of the crisis, he says, investment firms significantly improved their approach to managing risk, a move that has served them well amid the recent wobbles in markets around the world. But the sheer complexity of the global financial system means that there is still some way to go before market participants can truly begin to understand, and therefore measure, the dangers lurking in the shadows.
1. You spend a lot of time talking to fund managers. What is the No. 1 risk they are focused on right now?
The No. 1 topic is liquidity risk, and there are different types. Funds of funds are tied up with their managers; that's the client-manager liquidity risk. Then fund managers themselves have two different types: the liquidity of their underlying investments, and the liquidity terms they have granted their investors, which they have to manage.
Then you have liquid alternative funds. Liquid alts that replicate a hedge-fund-like return through liquid markets face the same kind of risk that a typical hedge fund would face. There is still a risk that currently liquid markets become less liquid; there are technical hiccups where a market shuts down and you get five hours of illiquidity. These things do happen, and that risk exists whether it's a liquid alt fund or a mutual fund.
The special risk that a subset of liquid alt funds faces is that they have given their investors preferred liquidity terms compared with those of their underlying investments. That's an incredibly dangerous position to be in. When liquidity dries up, it's a very fast event. It's the proverbial picking up nickels in front of the steamroller. And this is a gotcha of quantitative risk management.
2. What are people doing differently today in risk management than before the 2008 financial crisis?
There is a lot more maturity in the risk management space than there was in 2007 and 2008. I’ve seen people probe much more deeply to validate, verify and see flaws in the ways that risk is being assessed. That’s extremely important and very healthy for the industry. Anyone who relies on one model to get one number deserves what they get. There is no one tool that everyone is using.
Today I see people doing a lot of different types of stress tests, with a lot more sophistication in the philosophy and less sophistication in the mathematics. They understand the models aren't a crystal ball, and they understand that they have to look at five, 10, 15 different models to get at the real picture. So I am thrilled with the increase in sophistication of the risk management philosophy and approach. I'm also thrilled that the models aren't so micronuanced mathematically.
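To make that multi-model idea concrete, here is a minimal sketch, not from the interview, of estimating one-day value at risk for the same return series under three deliberately different models. Every name and number in it is illustrative; the point is that the spread between the estimates, not any single figure, is what gets examined.

```python
import numpy as np
from scipy.stats import norm, t

def var_estimates(returns, alpha=0.99):
    """One-day value at risk under three deliberately different models.

    Reporting the spread of estimates, rather than one number from
    one model, mirrors the multi-model philosophy described above.
    """
    r = np.asarray(returns)
    # Model 1: historical simulation, the empirical loss quantile.
    hist_var = -np.quantile(r, 1 - alpha)
    # Model 2: parametric normal, which assumes thin tails.
    norm_var = -(r.mean() + r.std(ddof=1) * norm.ppf(1 - alpha))
    # Model 3: fitted Student's t, which allows fatter tails.
    df, loc, scale = t.fit(r)
    t_var = -t.ppf(1 - alpha, df, loc, scale)
    return {"historical": hist_var, "normal": norm_var, "student_t": t_var}

# Fabricated fat-tailed daily returns, for demonstration only.
rng = np.random.default_rng(0)
print(var_estimates(rng.standard_t(df=4, size=2500) * 0.01))
```

On these fat-tailed inputs the normal model will typically report the smallest loss; that disagreement between models is precisely what looking at many of them is meant to surface.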
3. What do you mean?
People haven’t abandoned sophisticated models. What I mean is that there is a temptation among model aficionados to build mathematically elegant, theoretically sound models that use sophisticated math and sophisticated modeling techniques. The assumptions one has to make for a model to be elegant and beautiful from a theorist’s perspective are often at odds with the reality of the dirty marketplace. Once you relax some of those assumptions, you can no longer have an elegant model, because the elegance requires them.
Take [landmark options pricing model] Black-Scholes. It has a bunch of unrealistic assumptions going into the model — for example, that there are no transaction costs, that volatility is constant and that people have access to the same information immediately. None of those things is true, yet you must assume all three are true to be able to solve the Black-Scholes equation.
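For reference, the closed-form call price from that model fits in a few lines; the comments flag where the assumptions he lists enter. This is a minimal sketch with purely illustrative inputs, not anyone's production pricer.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """European call price under Black-Scholes.

    The closed form exists only because of the assumptions mentioned
    above: volatility (sigma) is constant over the option's life,
    trading is frictionless (no transaction costs), and everyone sees
    the same information immediately, so the stock follows one shared
    lognormal process.
    """
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative inputs: spot 100, strike 105, one year, 2% rate, 20% vol.
print(round(black_scholes_call(S=100, K=105, T=1.0, r=0.02, sigma=0.20), 2))
```

Relax any of those three assumptions, say, let volatility move around, and the neat closed form above disappears, which is exactly his point about elegance.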
Over the past several years, the trend has been to continue using some of the very sophisticated models but also to introduce some very simplistic ones: instead of making those detailed assumptions and solving the problem exactly, you relax some of them, use coarser-grained models and solve the problem approximately. When you do that, you know your view is a little bit cloudier, but you are seeing more of the picture.
4. How would you assess the state of risk management practices today?
There is stuff we are good at and stuff we are not good at, and the latter is the stuff that can get you. Contagion and shifting correlations are things we are not good at. We are not good at them because the human brain is good at pattern recognition, and it's exceptionally good at recognizing patterns it has experienced before. We count the evidence supporting our beliefs more heavily than the evidence that contradicts them; neuroscience has proven this beyond a doubt. It affects all of us, and it affects the risk manager. There is a natural tendency to concentrate only on those risks we have seen previously.
We as an industry have done a good job of starting to chip away at that problem. I just had this conversation with a client who wanted to know, “What might get me that I’m not looking for?” I get those questions today. The existence of that question is awesome. It means we’re on the right track. When you are not even aware of the fact that you are biased in your approach, you have no hope of getting out of that bias.
5. What’s next for risk management? What new approaches and models do you find promising?
The thing I see that has the most immediate promise in financial risk management is machine learning. Call it AI if you want, but it's basically using a structured, systematic approach, exploring the data through various statistical means to see what it says.
In practice you start with an assumption about what's going on underneath the data, and you let the computer modify your assumptions. It's like Darwinian evolution: survival of the fittest. You start with something, the computer generates 50 mutations of your model and sees which of those random mutations works better; that becomes the new starting point, and you repeat the process. You keep going until you've got something that really describes the data at hand. When you've got this much data and can throw an iterative computer solution at it, one that improves itself iteration after iteration, it doesn't take very long to get some useful tools out of it.
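A toy version of that loop, as a sketch under stated assumptions: the "model" here is just a quadratic fit to fabricated data, and every name and constant is invented for illustration. Each generation produces 50 random mutations of the current parameters, the fittest survives, and the process repeats.

```python
import numpy as np

rng = np.random.default_rng(42)

# Fabricated data: a noisy curve standing in for "the data at hand".
x = np.linspace(-1, 1, 200)
y = 1.5 * x**2 - 0.4 * x + rng.normal(0, 0.05, x.size)

def fitness(params):
    """Negative mean squared error of a quadratic model (higher is better)."""
    a, b, c = params
    return -np.mean((a * x**2 + b * x + c - y) ** 2)

# Start with an arbitrary guess and evolve it, survival-of-the-fittest style.
best = np.zeros(3)
for generation in range(100):
    # 50 random mutations of the current best model, as in the interview.
    mutants = best + rng.normal(0, 0.1, size=(50, 3))
    candidates = np.vstack([best, mutants])    # keep the parent in the pool
    scores = [fitness(p) for p in candidates]
    best = candidates[int(np.argmax(scores))]  # the fittest survives

print("evolved parameters:", np.round(best, 2))  # should approach [1.5, -0.4, 0.0]
```

This keep-the-parent scheme is the simplest possible evolution strategy; a real application would mutate the structure of the model, not just its parameters.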
It’s very good when you think you roughly have the answer and you want to home in on it. For example, it’s very good at identifying factor models. It’s also good at describing the probability of tail events and how bad bad can be. So from a pure technique perspective, I think machine learning is the next big tool for risk management. This is going to become a pretty common term on the Street within a year or two.