12/15/2011 10:54 am ET Updated Feb 14, 2012

Credit Raters and the Conventional Wisdom

Why is the credit rating problem so difficult? Nearly everyone accepts that the failures of the big credit raters on subprime mortgages were among the major factors behind the financial crisis. Anyone who has been around Wall Street and the markets for more than a few years knows that subprime may have been a massive failure, but it wasn't the only one; in fact, the credit rating agencies have never displayed any greater talent for prescience or market timing than risk managers and regulators, whether in the implosions on Wall Street in 2008, at Enron and WorldCom, or back through the decades of failure and breakdown that characterize the historical landscape of finance.

And yet there remains the expectation that credit raters should be able to see the future by analyzing the present. Much of this explains the contortions politicians, policymakers, pundits and regulators have put themselves through to explain how the credit rating process went wrong and what can be done to fix it. The primary critique involves conflict, bad faith and malfeasance. In this critique, subprime went off the rails because of the conflicted nature of the business: The raters sold their ratings to the very firms they covered, thus setting themselves up for Panglossian manipulation, à la Wall Street research. Moreover, there wasn't enough competition. The oligopoly of credit raters set up a race to the bottom; the firm that offered the easiest ratings got the prize.

The secondary critique is more basic: The raters are just dumb and lazy.

Now all, some or none of this may be true, though it's interesting that these explanations don't surface when the credit raters get things "right," however that's defined. John Gapper in last week's Financial Times points out that the credit raters have actually done pretty well in forecasting European sovereign debt problems -- well enough to stir up the ire of European politicians and technocrats who reject the implicit criticism a downgrade carries.

And, in fact, that's the real point here. Not only is the future that credit raters are supposed to forecast irremediably uncertain, but for long stretches of time raters simply have no reason, in terms of regulation, commerce or their own mental and career well-being, to buck the tide of conventional wisdom. The raters are the very embodiment of conventional wisdom: bureaucracies designed to mirror broad opinions and views of the marketplace and the times. They do this very well. Granted, sussing out current consensus views is easier than making accurate predictions about the future. But it's also safer, at least in most circumstances. And given the utility-like stature and size of these bureaucratic organizations -- and how could they not be given the sheer amount of analysis they must grind out? -- the gravitational pull toward the conventional wisdom was, is and always will be irresistible.

The credit raters are, in fact, a fascinating problem in democratic and market behavior. They sit uneasily, if stolidly, between those two phenomena, politics and the markets, which share powerful biases toward the conventional wisdom itself. (Let's go further: that pair, along with the media, defines the conventional wisdom.) Again, the motivating belief is that somehow financial analysis will allow raters -- or risk managers, regulators and value investors -- to peer through the opacity of the future and discern its shape. This cannot be done consistently over long stretches of time by any individual (even the esteemed Mr. Buffett, who missed the real estate bubble), let alone by markets or by hives of low-paid credit analysts in cubicles at Moody's or Standard & Poor's.

The future is always a guess (I can hear Jon Corzine singing that to Congress) and one that is inevitably going to go wrong now and then. In fact, this intense pressure to interpret not the "real" future but the future embedded in the conventional wisdom would be a great deal less if the raters didn't straddle those two worlds. But they do: They are blessed by regulators as necessary utilities, deeply entwined with markets, banks and the regulatory system. They are essential and influential; if they did not exist, they would be quickly reinvented with the same result. There is no escape. They are not free to make their "best" analytical decisions. They must always be aware of the political and commercial contexts. Or, put another way: They are sensitive to the current wisdom without necessarily being aware of it. Just like most folks.

And so this is larger than just the poor, pilloried rating establishment. Consider the new plan, emerging from Dodd-Frank, to remove credit ratings from the process of setting bank capital standards. Instead, regulators will hand over to big banks standard algorithms that will use public information to determine risk levels. This resembles a kind of do-it-yourself ratings process. True, it removes "corrupt" raters from the game and makes the process more transparent to investors and the five voters who care.

But how good are these formulas? Where have they been hiding? Are they not manifestations of collective and current wisdom? What we will discover with all this is that these formulas can't predict the future any better than the raters, who undoubtedly use similar formulas already. If these analytical engines from regulators were so good, why didn't regulators have a clue in 2007 and 2008? My prediction: At some point, when the sheer inadequacy of quantitative algorithms becomes obvious, regulators and politicians will toss all of this back to the credit raters. After all, one of the important, if furtive, purposes of credit raters is to give regulators and pols someone to blame when things go wrong.

Keynes saw all this many years ago, with his analogy of the markets as a beauty contest. What he did not pursue is that the same process unfolds in democratic governance. Majority rule, like price discovery, is a mechanism not for accurately predicting an unknowable future, but for unearthing what most people think about the present. Viewed that way, the real problem with credit raters is not that they screw up but that they're used by everyone as a convenient bearer of responsibility.

Much of this is about size and scale. No banker or regulator wants to hire hordes to examine all those loans; they want raters to take up the burden, using formulas to extrapolate the whole from a few. No investor or trader can afford to analyze every position. No CFO or treasurer, particularly of a bank, a hedge fund or a large company, wants to judge every investment position and piece of collateral. No manager or investor of a collateralized vehicle can make judgments on every mortgage, loan or bond. And so they turn to sources that will make those judgments for them and shoulder the blame: the markets (meaning investors) and the raters. (Politically, the problem with blaming investors is that they're democratically sacrosanct, like blaming homeowners for the mortgage crisis. That leaves the credit raters.)

This is the fiction we live by. Viewed this way, there is no way to "fix" the credit raters, just as no one in his right mind believes that the price of a stock today will last more than a few minutes. They are extensions of an imperfect democratic us. Instead, we take refuge in two unprovable beliefs: that a few unconflicted formulas will tell us the future (the American way) or that we don't really need to try at all because politics trumps markets (the European tendency). What we really need to do is rein in our expectations about our abilities to master information and, most importantly, to know the future.

Robert Teitelman is editor in chief of The Deal magazine.