INEFFICIENT MARKETS - Model Failure

Banking regulators might want to reexamine their fondness for value-at-risk calculations.

THE CURRENT CREDIT CRUNCH is the first in which mathematical models have played an important role. Common flaws in risk models link the failure of rating agencies to predict the level of defaults on subprime mortgages, the freezing up of the market for complex debt securities and problems at several dozen hedge funds and a handful of banks, including the U.K.'s Northern Rock. We handed over the job of assessing financial risk to mathematicians and physicists -- and they failed.

The most popular of the quantitative risk management tools used by institutional investors and bankers is the value-at-risk (VaR) model. It seeks to estimate the maximum loss a portfolio of equities, loans or other securities is likely to experience over a given period, at a given level of confidence. Since J.P. Morgan & Co. first developed it in the early 1990s, VaR has swept through the world of finance. It is endorsed in the Basel II banking regulations as a key tool for measuring risk and capital adequacy.
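
For reference, the standard textbook definition (not tied to any particular bank's implementation): the VaR of a portfolio at confidence level α over horizon h is the smallest loss that losses exceed with probability no greater than 1 - α,

```latex
\mathrm{VaR}_{\alpha,h} \;=\; \inf\{\, \ell \in \mathbb{R} : \Pr(L_h > \ell) \le 1 - \alpha \,\}
```

where L_h denotes the portfolio loss over the horizon. A one-day 99 percent VaR of $10 million, for example, means that under the model daily losses should exceed $10 million only about once every hundred trading days.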

An account of how these models came to be adopted is provided in an excellent new book, Plight of the Fortune Tellers: Why We Need to Manage Financial Risk Differently, by Riccardo Rebonato, the global head of market risk and quantitative analysis at the Royal Bank of Scotland.

After financial markets were deregulated, many companies bypassed banks to borrow directly in the capital markets. In response, many banks started originating loans, turning them into securities and selling them.

Once banks no longer held their loans to maturity but instead owned large amounts of tradable securities, they needed to estimate the impact of market risk on their balance sheets. The VaR model, using statistical techniques to examine the historical correlations and volatilities of assets in a portfolio, crunches the numbers and, appealingly, generates a single figure for the maximum potential loss (expressed in dollar terms). The beauty of VaR is that it takes into account the fact that different securities respond in varying ways to the same event, calculates the co-dependence between the assets and estimates the overall volatility of a portfolio.
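
As a rough sketch of how the widely used variance-covariance flavor of this calculation works, the Python fragment below builds a covariance matrix from simulated daily returns for three assets and converts it into a single one-day dollar VaR figure. The positions, return data and Gaussian assumption are invented for illustration; they are not drawn from any particular bank's system.

```python
import numpy as np
from scipy.stats import norm

# Illustrative daily returns for three assets (rows = days, columns = assets).
# A real implementation would use several years of observed market data.
rng = np.random.default_rng(0)
returns = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[1.0e-4, 5.0e-5, 2.0e-5],
         [5.0e-5, 4.0e-4, 8.0e-5],
         [2.0e-5, 8.0e-5, 9.0e-4]],
    size=1250,  # roughly five years of trading days
)

positions = np.array([40e6, 35e6, 25e6])  # dollar holdings in each asset

# Estimated covariance matrix: volatilities on the diagonal,
# co-dependence between the assets off the diagonal.
cov = np.cov(returns, rowvar=False)

# One-day portfolio standard deviation, in dollars.
portfolio_sigma = float(np.sqrt(positions @ cov @ positions))

# Parametric (Gaussian) one-day VaR at the 99th percentile: the dollar loss
# the model says should be exceeded on only one trading day in a hundred.
var_99 = norm.ppf(0.99) * portfolio_sigma
print(f"One-day 99% VaR: ${var_99:,.0f}")
```

The appeal described above is visible in the last two lines: volatilities, correlations and position sizes collapse into a single dollar figure.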

But quantitative risk models can’t meet bankers’ high expectations. “The starting point of any analysis based on statistical techniques is that the future should ‘look like’ the past,” writes Rebonato, but this can lead to gross errors in a constantly changing world. The models used by rating agencies to calculate default probabilities on subprime mortgages, for instance, assumed that U.S. home prices would continue to rise.

Then there is the question of how much relevant information is available. Rebonato cites an example of a bank employing a mere five years of data to measure the loss distribution on a portfolio to the 99.95th percentile. Such precision is absurd -- it suggests that a certain loss is unlikely to be exceeded more than once in 2,000 years. Initially, VaR models didn’t have such grandiose ambitions. They merely attempted to calculate loss distributions over short trading periods (a day or a month) with lower levels of confidence -- say, to the 90th or 99th percentile. What Rebonato calls “percentile inflation” occurred when regulators decided to use VaR to estimate the degree of risk in a bank’s loan portfolio over the course of a year.
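
The arithmetic behind the 2,000-year figure: a loss at the 99.95th percentile of an annual loss distribution is expected to be exceeded with probability 0.05 percent in any given year, that is

```latex
\frac{1}{1 - 0.9995} = \frac{1}{0.0005} = 2{,}000 \text{ years},
```

while five years of history supply only about 1,250 daily observations (assuming roughly 250 trading days a year), far too few to pin down so remote a tail.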

VaR models assume that profits and losses on a portfolio follow a given distribution from which probabilities can be inferred -- another dubious assumption. Financial returns are not normally (Gaussian) distributed; markets are replete with feedback effects and “fat tails.” Thus, some argue, it’s impossible to make strong inferences from financial data alone, and the probabilities produced by the risk models, above all in the tails, are particularly unreliable. Nor are correlations between assets fixed over time; during a crisis they can change rapidly. VaR models can even contribute to market instability. In a prize-winning essay (“Sending the Herd Off the Cliff Edge,” 2000), Avinash Persaud, now chairman of London-based Intelligence Capital, warned that the widespread adoption of similar risk models encourages herding by market participants.
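
A small simulation makes the fat-tail point concrete. In the sketch below, the Student-t distribution and every parameter are chosen purely as a stand-in for heavier-tailed real-world returns: a Gaussian VaR is fitted to the fat-tailed data, and the model's predicted breach frequency is compared with what actually happens.

```python
import numpy as np
from scipy.stats import norm, t

# Simulated daily returns with fat tails: Student-t with 3 degrees of
# freedom, rescaled to unit variance and then to roughly 1% daily volatility.
returns = 0.01 * t.rvs(df=3, size=500_000, random_state=1) / np.sqrt(3.0)
losses = -returns

mu, sigma = returns.mean(), returns.std()

for level in (0.99, 0.999):
    # Loss threshold a Gaussian model expects to be breached (1 - level) of the time.
    gaussian_var = -(mu + norm.ppf(1 - level) * sigma)
    observed = np.mean(losses > gaussian_var)
    print(f"{level:.1%} Gaussian VaR: predicted breach rate {1 - level:.2%}, "
          f"observed {observed:.2%}")
```

In this simulation, the further out the percentile, the more the Gaussian model understates how often its own threshold is breached, a reminder of why the distributional assumption and percentile inflation compound each other.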

In the early 1920s the influential American economist Frank Knight defined risk as “measurable uncertainty,” distinguishing it from uncertainty proper, which cannot be measured. The quantitative economists and financiers of recent years have acted as if all financial uncertainty were of the measurable kind. Their repeated failures demonstrate that risk remains intractable even to the most advanced mathematical techniques. The recent woes of risk models should force the banking regulators in Basel to return to the drawing board. They might consider how best to reintroduce a bit of old-fashioned judgment to the business of running banks and managing investment portfolios.

Edward Chancellor, an editor at Breakingviews.com, is the author of Devil Take the Hindmost, a history of financial speculation.