What are things worth?


The answer seems simple enough. Just look around the marketplace to see what similar items are selling for. But what if your house has a pool, while the one that sold next door doesn’t? Unless you are dealing with an item with exact duplicates that are bought and sold every day, like stock in a publicly traded company, it’s hard to know just what your item is worth.


It’s a devilish problem in the business world, where companies need to account for the fast-changing values of complex financial instruments — from insurance policies to employee stock options to exotic derivatives — for which there is no ready sales history. Yet accounting standards are tightening, requiring that businesses justify valuations rather than simply use their best guess or original purchase price, as they did in the past. So firms are turning to ever more complicated financial models that attempt to deduce values using an array of indicators.


“What you’re trying to figure out is: What if you had to sell [an asset] in the market? What would somebody be willing to pay?” said Wharton finance professor Richard J. Herring. “People are trading on the basis of these [models], but it is difficult, because they are extremely complex, and regulators are worried that they can be pretty easily manipulated.”


This dilemma was the topic of the Tenth Annual Wharton/Oliver Wyman Institute Risk Roundtable held May 31-June 1 and sponsored by the Wharton Financial Institutions Center and the Oliver Wyman Institute. The Roundtable was hosted by Herring, the Center’s co-director.


International and U.S. accounting bodies are strengthening rules on how to place “fair value” on hard-to-price assets. Last September, for example, the Financial Accounting Standards Board (FASB) in the U.S. adopted Statement 157, which requires that, whenever possible, companies rely on market data rather than their own internal assumptions to value assets.


But some critics argue that computerized valuation models rely on assumptions so uncertain that the results should merely be noted in financial statements rather than included in tallies of assets and liabilities, as FASB requires. The new rules take effect with financial statements for fiscal years beginning after November 15, 2007. “Fair values are unverifiable…. Any model is an opinion embodying many judgments,” said critic Mark Carey, finance project manager for the Federal Reserve Board, during remarks at the conference.


While conceding that the Fed had “lost the battle” to minimize use of fair value accounting, he warned that allowing firms to set up their own valuation models, rather than relying on standardized ones, invites trouble. “The problem is fraud,” he noted. “The reason the Fed is concerned about this is because we are worried about the state of a world in which a firm wants to conceal its insolvency. That’s fairly easy to do in a fair value system.”


Insuring Against Catastrophe


Insurance is one field that is using more elaborate models to calculate risks, set policy prices and figure the current value of policies issued in the past, according to panelist Jay Fishman, chairman and CEO of The Travelers Companies. “Catastrophe modeling,” for example, forecasts the likelihood of earthquakes, terrorism and other events that result in claims.


In his presentation, “Insuring against Catastrophes: The Central Role of Models,” Fishman noted that insurers previously assessed catastrophe risks by analyzing past events. Typically, they figured average hurricane losses on a statewide basis, neither accounting for the concentration of damage in coastal areas nor properly estimating the losses an unusually large hurricane could cause. Before Hurricane Andrew struck the U.S. in 1992, the most damaging hurricane was Hugo in 1989. Hugo cost insurers $6.8 billion, while Andrew cost them $22 billion and left a dozen insurers insolvent.


New catastrophe models are far more complex, Fishman said, because they add data on likely storm paths predicted by scientists; the types of construction, ages and heights of buildings along those paths; the value of insurance issued; policy limits; deductibles; and other factors bearing on losses. In addition, insurers now consider changes in the frequency of big storms caused by factors like rising sea temperatures from global warming.
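The arithmetic behind those policy terms can be made concrete with a small sketch. The Python below shows how a deductible and a policy limit turn a simulated gross damage figure into an insured loss, and how event probabilities roll up into an expected annual loss; every number here is a hypothetical illustration, not data from any insurer’s model.

```python
# Hedged sketch: how per-policy terms enter a catastrophe model's loss
# calculation. All probabilities, damages, and policy terms are hypothetical.

def insured_loss(gross_damage: float, deductible: float, limit: float) -> float:
    """Apply the deductible, then cap the payout at the policy limit."""
    return min(max(gross_damage - deductible, 0.0), limit)

# Simulated events for one insured property: (annual probability, gross damage).
simulated_events = [
    (0.02, 150_000),    # moderate storm
    (0.002, 900_000),   # severe, low-frequency storm
]

expected_annual_loss = sum(
    p * insured_loss(damage, deductible=25_000, limit=500_000)
    for p, damage in simulated_events
)
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")  # $3,500
```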


With guidance from these more sophisticated models, Travelers has raised deductibles for wind damage, tightened its coverage for business interruption and changed premiums to reflect a better understanding of risk, according to Fishman, who added, however, that models have limits. They are not good, for example, at accounting for long cycles in weather patterns, nor can they forecast claims when events are bigger than expected. Hurricane Katrina, for instance, caused more damage inland than the models had forecast, he said.


Softening the Jolts


Similar shortcomings are found in models used in other industries, causing debate about how models should be constructed. Financial institutions have trouble, for example, tracking daily changes in values of credit default swaps, collateralized mortgage obligations, over-the-counter options, thinly traded bonds and other securities for which there is no liquid, transparent market.


It’s not uncommon, said Herring, for a large financial institution to have 2,000 valuation models for different instruments. And the penalties for getting the results wrong can be severe, as investors learned in the Enron and Long-Term Capital Management debacles, or with the recent financial restatements by Fannie Mae.


The problem has recently been highlighted by the fallout from the subprime mortgage lending binge of the past few years. These loans typically were bundled together and sold to investors as a form of bond. Now, rising interest rates increase the likelihood that some homeowners will fall behind on their payments, undermining the bonds’ values. But the models cannot account for these factors very well because subprime mortgages are so new that there is little historical data. Amidst this uncertainty, financial institutions are hustling to protect themselves, and consumers may find it harder to get loans as a result. Better modeling could soften these jolts.


Though valuation models must be customized for every instrument, they should share some underlying principles, said Thomas J. Linsmeier, a FASB member, noting that the goal of Statement 157 is to arrive at a price that would be received if the asset were sold in an “orderly transaction” — in other words, not in a crisis or “fire sale.”


Many financial assets are so highly customized that there are no comparable sales. Even when there are, many sales are private transactions that do not produce data for others to use as examples, he said. In these cases, the asset’s owner should try to determine what should be considered the “principal market” in which the asset would be bought and sold, so that data from smaller, less representative markets can be screened out to reduce confusion. “For many financial instruments there are many, many markets in which you might exchange those items…,” he noted. “If there is a principal market, let’s use that … rather than using all possible markets.”
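As a rough illustration of that screening step, the sketch below treats the most active venue as the principal market and takes its price as the fair-value input; the market names, volumes, and prices are invented for the example.

```python
# Illustrative sketch of selecting a "principal market": among several venues
# where an asset trades, keep the most active one and screen out the rest.
# Venue names, volumes, and prices are hypothetical.

trades = {
    # venue: (observed trading volume, last observed price)
    "dealer_network": (5_000, 101.25),
    "regional_exchange": (400, 99.10),
    "private_placements": (50, 97.00),
}

principal_market = max(trades, key=lambda venue: trades[venue][0])
price_for_valuation = trades[principal_market][1]
print(principal_market, price_for_valuation)  # dealer_network 101.25
```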


When there is no data on sales of comparable assets, firms should turn to market prices for similar assets, Linsmeier suggested. When those are not available either, firms must rely on their own internal estimates. But those estimates should be based on the same assumptions an outside buyer would use, rather than on the firm’s own assumptions, which might be biased to make its accounts look better, he said, adding that, generally, any data obtained from the marketplace is preferred over internal company estimates.
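That preference ordering amounts to a simple fallback hierarchy. A minimal sketch, with function and variable names chosen for this illustration rather than prescribed by FASB:

```python
# Sketch of the measurement hierarchy: a quoted price for the identical asset,
# else an (adjusted) price for a similar asset, else an internal estimate
# built on the assumptions an outside buyer would use.
from typing import Optional

def fair_value(identical_price: Optional[float],
               similar_price: Optional[float],
               internal_estimate: float):
    if identical_price is not None:
        return identical_price, "market price, identical asset"
    if similar_price is not None:
        return similar_price, "market price, similar asset (adjusted)"
    return internal_estimate, "internal model estimate"

print(fair_value(None, 98.5, 101.0))
# (98.5, 'market price, similar asset (adjusted)')
```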


Biases and Stock Options


The problem of internal firm biases influencing accounting is illustrated by the recent debate over whether companies should count stock options issued to executives and other employees as an expense.


While economists generally agreed that options are a cost of business that should be counted as an expense, many business groups opposed the move, noted Chester Spatt, chief economist at the Securities and Exchange Commission. Expensing opponents argued it was not possible to accurately value options years before they could be exercised, because their future value would depend on the company’s stock price at the time.


“It seems surprising that companies that apparently don’t understand the cost of a compensation tool would be inclined to use it to such an extent,” Spatt said, suggesting that companies do, in fact, know the value of their options grants but don’t want to reveal the cost to shareholders who might think executives are overpaid. Proper accounting would discourage companies from issuing too many options, he noted.


Markets have long used modeling to place present values on assets whose future values will fluctuate with market conditions, Spatt added. Traders, for example, use models to value collateralized mortgage obligations whose future value will depend on changing interest rates and homeowners’ default rates.
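A toy version of that idea, assuming level pool payments, a flat annual default rate and a flat discount rate; real CMO models are far richer, and every figure here is hypothetical:

```python
# Hedged sketch: discount a mortgage pool's expected cash flows, where the
# paying fraction of the pool shrinks with an assumed default rate and the
# discount rate reflects current interest rates. All inputs are hypothetical.

def pool_value(annual_payment: float, years: int,
               default_rate: float, discount_rate: float) -> float:
    value, surviving = 0.0, 1.0
    for t in range(1, years + 1):
        surviving *= 1.0 - default_rate            # fewer loans keep paying
        value += annual_payment * surviving / (1.0 + discount_rate) ** t
    return value

# Higher assumed defaults and higher rates both push the value down.
print(round(pool_value(1_000_000, 10, 0.02, 0.05)))  # ~6.98 million
print(round(pool_value(1_000_000, 10, 0.06, 0.07)))  # ~5.25 million
```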


Though modeling has been around for many years and appears to be getting better, even those who design models concede they have flaws. “I think there is a lot more need for research and discussion of approaches for measuring model risk,” said panelist Darryll Hendricks, managing director and global head of quantitative risk control for UBS Investment Bank. Oftentimes, assumptions used in models turn out to be wrong, he pointed out. A common model input for valuing stock options, for example, is the expected price volatility of the stock. But future volatility may be very different from the past patterns used in the assumption.
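To see how much the volatility assumption matters, consider a short sketch using the standard Black-Scholes formula for a European call, one common option-pricing model; the parameter values below are hypothetical:

```python
# Sketch: the same option valued under two volatility assumptions, using the
# standard Black-Scholes call formula. All parameter values are hypothetical.
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, rate, vol, years):
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

# An at-the-money three-year option: doubling the assumed volatility raises
# the model value by more than half.
for vol in (0.20, 0.40):
    print(vol, round(bs_call(100, 100, 0.05, vol, 3.0), 2))  # ~20.92, ~32.75
```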


To make its models as good as possible, a firm should have a controlled, disciplined way of field testing them before introduction, and it should continually evaluate a model during the period it is used, Hendricks said. UBS discusses its models’ performance during monthly meetings among the traders who use them.
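A minimal sketch of that kind of ongoing check, with an error metric and review threshold that are purely hypothetical, chosen only to illustrate the monitoring loop:

```python
# Hedged sketch: flag a valuation model for review when its prices drift too
# far from subsequently observed trades. Metric and threshold are hypothetical.

def needs_review(model_values, traded_prices, tolerance=0.02):
    """True if the mean absolute relative pricing error exceeds the tolerance."""
    errors = [abs(m - t) / t for m, t in zip(model_values, traded_prices)]
    return sum(errors) / len(errors) > tolerance

print(needs_review([100.2, 99.8, 101.5], [100.0, 100.1, 100.4]))  # False
print(needs_review([105.0, 104.0], [100.0, 100.5]))               # True
```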


While modeling will continue to be controversial, Herring thinks it will keep getting better. He predicts firms will increasingly share data on their proprietary models, and he thinks model users will gradually adopt better standards for validating their models — making sure, for example, that evaluations are done by disinterested outsiders rather than the model designers themselves.


Advances in computing power and financial analysis have led to a mushrooming of new financial products in recent years, and should also help to improve the modeling used to measure those products’ values, Herring noted. “All of this has made it possible to produce these new products and models. But it also means a lot more is riding on getting the models right.”