
Yet another Bendheim economist, South Korean game theorist Hyun Song Shin, tackled some of those spillover issues in Oxford University’s 2008 Clarendon Lectures, published as “Risk and Liquidity” in 2010. The Oxford-trained Shin, an adviser to the South Korean government, delved into the power of market prices not only to signal information but to induce action. Much of this action in a mark-to-market world, he asserts, is “pro-cyclical,” not unlike leverage, creating herding, illiquidity, feedback and amplification transmitted through interconnections.

How does someone capture those often opaque interconnections? “What we try to measure is how much your firm may be affected if my share price tanks,” says Brunnermeier. “I don’t have to know all of the underlying links. It’s a top-down approach.” He and the Fed’s Adrian take historical data from bubble periods — they have weekly data back only to 1986 — and try to gauge how firms with similar underlying characteristics move in a coordinated fashion. The key is to develop a useful set of attributes.

“What we’re saying is, what is the danger to the financial system if Lehman Brothers goes down?” Brunnermeier explains. “We can also turn it around: If the system is in trouble, how will Lehman fare?”
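Formally, the measure is a conditional value at risk. In a stylized sketch of the definition in Adrian and Brunnermeier’s working paper (with $X^{i}$ standing for institution $i$’s return on mark-to-market assets and $q$ a tail probability such as 1 percent):

\[
\Pr\!\left(X^{\text{sys}} \le \mathrm{CoVaR}^{\text{sys}\mid i}_{q} \;\middle|\; X^{i} = \mathrm{VaR}^{i}_{q}\right) = q,
\qquad
\Delta \mathrm{CoVaR}^{\text{sys}\mid i}_{q} = \mathrm{CoVaR}^{\text{sys}\mid i}_{q} - \mathrm{CoVaR}^{\text{sys}\mid i \text{ at its median}}_{q}.
\]

The difference term, $\Delta \mathrm{CoVaR}$, answers the first question: how much worse the system’s tail gets when one firm is in its own tail. Reversing the conditioning — sometimes called exposure CoVaR — answers the second.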

“We are still fine-tuning,” he adds in his clipped, brisk English. “We’re searching for the right characteristics — maturity mismatches, size, leverage, liquidity risk. If you see leverage building up, that’s a sign.”

Although “CoVaR” has attracted attention — a flock of new papers has tackled aspects of it — relevant data is patchy, the metrics themselves are indirect, and lots of work still needs to be done. Regulators are working with other approaches that seek the same macroprudential goal. “It’s actually good to look at [the problem] from different angles,” says Brunnermeier. Another tool, the “distress insurance premium,” developed by Xin Huang, Hao Zhou and Haibin Zhu of, respectively, the University of Oklahoma, the Fed and the Bank for International Settlements, uses credit default swaps to indicate firm vulnerability. But that method is limited because CDS data covers only one crisis — 2008. A third measure, the “systemic expected shortfall,” advanced by a group of NYU economists, examines how a firm’s capital will be affected by a decline in equity prices.
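A rough sense of how the NYU measure works: the key empirical ingredient in that framework (associated with Viral Acharya, Lasse Pedersen, Thomas Philippon and Matthew Richardson) is the marginal expected shortfall, the average equity return of firm $i$ on the market’s worst days,

\[
\mathrm{MES}_{i} = -\,\mathbb{E}\!\left[\,R_{i} \;\middle|\; R_{m} \le C\,\right],
\]

where $C$ is a crisis threshold, say the cutoff for the market’s worst 5 percent of days. This is only a sketch of one building block; the full measure also folds in leverage, flagging firms likely to be short of capital precisely when the system as a whole is.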

In fact, there are dozens of so-called risk indexes under development. NYU’s Leonard N. Stern School of Business has started the V-Lab (for Volatility Laboratory), run by Nobel laureate Robert Engle, which publishes online daily measures of a range of risks, including systemic risk, listed by both country and financial institution.

Will any of these methods sound the alarm? Will anyone respond? We’ll find out only in a live crisis. So far, the limitations on these predictive forays are not just theoretical and empirical but rooted in the data itself. Brunnermeier hopes better data will be generated, particularly with the involvement of the Office of Financial Research. He has worked on a liquidity mismatch index that indicates how easy — or difficult — it is to sell assets off in a crisis. But the index relies on flow-of-funds data, which has grown more complex and opaque with the diffusion of derivatives.
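The construction, in stylized form (the weights below are illustrative placeholders, not those of the actual index), nets liquidity-weighted assets against liquidity-weighted liabilities:

\[
\mathrm{LMI}_{i} = \sum_{k} \lambda^{A}_{k}\, a_{i,k} \;-\; \sum_{j} \lambda^{L}_{j}\, \ell_{i,j},
\]

with cash carrying an asset weight near one, hard-to-sell holdings a weight near zero, and overnight funding the heaviest liability weight. A deeply negative reading flags a firm that could not meet a run on its funding by selling what it owns.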

Meanwhile, there’s another aspect that spillover analysis helps illuminate: Different kinds of bubbles vary in their costs and benefits. Princeton’s Harrison Hong and David Sraer, for instance, have developed a model that tries to distinguish “loud” equity bubbles — characterized by high prices, volatility and trading volume — from “quiet” bubbles, such as credit bubbles, with more-subdued prices, volatility and volume. Noise is no guide to threat. Equity bubbles generally cause direct losses; stealthy credit bubbles, with their thicket of interconnections, spawn more havoc.

In his recent book, Doing Capitalism in the Innovation Economy, longtime venture capitalist and Warburg Pincus senior adviser William Janeway argues for the need to differentiate bubbles along two dimensions: “the object of speculation” — whether it’s a fundamental technology or a financial instrument — and “the locus of speculation,” whether confined to capital markets or spilling over into credit markets. Stamping out every suspected bubble may erase risk that produces reward, Janeway contends.

Where does that leave stewards of financial stability around the world? With broad generalizations, some traditional measures — leverage, liquidity — and a flock of systemic risk measures, like CoVaR, that may or may not prove effective. We know generally that equity bubbles in relative isolation are less systemically dangerous than credit bubbles, which drag down interconnected banks. But an equity crash can still take down banks, as it did in 1929. We know, as Janeway lays out, that bubbles that seem to focus on a technology may be worth letting ride. We also know that some bubbles may be fueled by disagreement and that leverage and unusually high volume and volatility are dangers. Last, we know that the key evidence of excessive asset swings comes from a comparison with historical benchmarks.

What don’t we know? A lot, starting with timing. If we don’t know when a bubble will burst, or when it is most reasonable to try to dampen it, we need to act carefully. In that sense, as one Fed official notes, stability monitoring resembles monetary policy. It’s less a science than a data-driven art.

Eventually, we will understand more, we will have more data, and we will do more testing — though public trial and error spawns its own damaging spillover. But all the data in the world cannot ensure certainty. What we can’t calculate is the judgment and will of decision makers. Will they be prepared to yank away the punch bowl, and do they possess the wisdom to discriminate among assets, even with Wall Street or a politically polarized public claiming that “this time it’s different”? Will they be willing to act even if they’re coping with the effects of their own policies? Data, metrics, models and analytics can help, but they may be ambiguous and can’t make the tough decisions. As Frydman and Goldberg might say, there’s no algorithm. And what lurks if we get it right a few times? Moral hazard.

Bubbles invite yet another metaphor, the black hole: still poorly understood but such an integral part of the market universe that Janeway titled one chapter of his recent book “The Banality of Bubbles,” followed by one called “Explaining Bubbles” and a third dubbed “The Necessity of Bubbles.”

Bubble fears are now part of the public consciousness. Like it or not, our monitors of financial stability operate in a system shaped by popular sentiments that begin with the belief that markets naturally rise — at best, a frothy notion. Bubbles, or asset swings, challenge that thinking. But it’s a foreboding sign when we can’t agree on what “rational” means. As Janeway writes, “In this term rational there is a nexus of confusion that infects both academic and popular discussions of how economic and financial agents think and act.” The idea that bubbles represent a decline of virtue, and that everything that occurred during the bubble years was a lie, persists. The concept of the “good” bubble remains counterintuitive. The belief that the future is predictable lingers despite contrary evidence.

Yes, it’s a new era. • •
