This election year’s hard-fought presidential race has brought increased scrutiny to the credibility and methodology of polls, with implications not only for politics but also for business forecasting, according to Wharton faculty.


With growing uncertainty about the value of polls, new ways to predict election outcomes – including the use of aggregate poll results, expert opinion surveys and betting markets – are getting a closer look. “It’s interesting that there is so much attention paid to polls, but my guess is that polls are the least accurate way of gauging the election,” says Wharton marketing professor J. Scott Armstrong.


As part of an experiment in forecasting, Armstrong has created a web page tracking what he calls the Pollyvote, based on a parrot character named Polly. The parrot averages different ways of forecasting and comes up with her own predictions about the presidential race.


With two weeks to go before the election, Polly says President Bush will capture 51.8% of the two-party vote – less than what the quantitative models suggest, but better than the odds given by a panel of experts and the Iowa Betting Market, a futures exchange that allows people to trade on projected election results.


Polly has been forecasting since March and her views are coming into closer sync with the polls. Armstrong says that is predictable. “As we get closer we expect things to converge and that’s happening. We have always expected the polls would be inaccurate further out, but as you get closer you expect them to be more valid.” Research shows that an aggregate of different methods of predicting outcomes reduces error, Armstrong explains. Under ideal conditions, combined forecasts are at times even more accurate than their most accurate components.
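The error-reduction effect Armstrong describes can be illustrated with a small simulation. The three “methods” below are hypothetical stand-ins (their noise levels and the 51.8 figure are assumptions for illustration, not Pollyvote’s actual components): each is individually noisy, but a simple Pollyvote-style average of them has a smaller typical error than any one method on its own.

```python
import random

random.seed(7)

TRUE_SHARE = 51.8  # hypothetical "true" two-party vote share being forecast

# Three hypothetical forecasting methods, each unbiased but noisy in its own way.
# The standard deviations are illustrative assumptions.
def poll():    return random.gauss(TRUE_SHARE, 3.0)
def model():   return random.gauss(TRUE_SHARE, 2.0)
def experts(): return random.gauss(TRUE_SHARE, 2.5)

trials = 10_000
individual_err = 0.0
combined_err = 0.0
for _ in range(trials):
    forecasts = [poll(), model(), experts()]
    # Average error of the methods taken one at a time
    individual_err += sum(abs(f - TRUE_SHARE) for f in forecasts) / len(forecasts)
    # Error of the simple average of all three methods
    combined = sum(forecasts) / len(forecasts)
    combined_err += abs(combined - TRUE_SHARE)

print(f"mean individual error: {individual_err / trials:.2f} points")
print(f"mean combined error:   {combined_err / trials:.2f} points")
```

Because the methods’ errors are partly independent, averaging cancels some of the noise, which is why the combined forecast’s error comes out well below the typical single-method error.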


Armstrong has been working on Polly’s web page with two political scientists, but his goal for the project is to use election predictions as a hook to show corporate managers that they can build better business forecasts for their companies using an aggregate of inputs. “What I’m looking for is some way to demonstrate to management that the science has a payoff in real life,” says Armstrong. The Blue Chip Economic Indicators survey works on a similar consensus principle, he points out, but few businesspeople have adopted the method when building their own internal forecasts.


The Polly page includes a measure called the Delphi survey, a consensus of 16 experts published periodically and melded into the Pollyvote. “We suspect that 16 experts will be more accurate than 1,000 voters being interviewed,” says Armstrong.


He tends Polly with two other academics, Alfred G. Cuzan, professor of political science at the University of West Florida, and Randall Jones, professor of political science at the University of Central Oklahoma.


Like Polly, a parrot who is only capable of repeating what she hears, the professors are trying to keep their own leanings off the site. The three have agreed not to share whom they are planning to vote for. “The idea is that Polly has no opinion. She hardly has a brain,” says Armstrong. “She’s the perfect one to report back on what people are saying.” The experiment will continue after the election, and he and his partners will evaluate which methods were more successful. “We think the techniques are useful in business forecasting and will be useful in other elections around the world and in other voter issues in the states. There are many applications.”


The biggest problem with most forecasting is that people rely too heavily on their own judgment and experience, he says, adding that in many cases people turn to polls or other forms of forecasting merely to support the view they have already formulated.


Criticism of polling is nothing new, according to Frank Newport, editor-in-chief of The Gallup Poll. “We’ve seen polls become matters of controversy in each election, in some more than others. It may be accelerated this year because this election is particularly intense. Our data show people feel strongly – on either side – about this election and because it is so close there may be intense sensitivity to all kinds of information.”


Newport says pollsters have been struggling against declining response rates for a number of years, in part because of new technology. Caller ID, cell-phone-only households and a general reluctance by potential respondents to answer any calls after years of intrusion by telemarketers are all obstacles to an accurate, randomized poll. “The work has become more challenging, no question about that. So far we think we are able to do good, valid, projectable polls and we continue to monitor these changes.”


Yet the rise of quickie opinion polls does concern survey research professionals, he adds. “It’s important that we make sure people understand legitimate polls are random – where we select people randomly.” Instant call-in polls may be interesting, but they have no scientific validity, Newport says. “Television likes them from a marketing standpoint because it gets viewers involved.”


Armstrong describes attending a corporate meeting in Bangkok several years ago during which company leaders projected that a new product would drive sales up 20%. Armstrong asked each of the people in the room to make their own projections. No one forecast a sales increase of more than 5%. “If you want to know the sales of a new product, have everybody in the office bet on what the sales will be and use those results,” he suggests. “The results produced in a betting market will be more effective than a traditional meeting where everybody is listening to what the boss thinks.”
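Armstrong’s suggestion amounts to aggregating everyone’s independent projection instead of deferring to the boss. A minimal sketch of the idea, using made-up numbers patterned on the Bangkok anecdote (the individual figures below are illustrative assumptions, not data from that meeting):

```python
# The boss projected a 20% sales increase; each person in the room then
# made an independent projection. These individual figures are hypothetical.
boss_projection = 20.0
individual_projections = [3.5, 4.0, 2.0, 5.0, 1.5, 4.5, 3.0]  # percent

# Aggregate the room's independent judgments with a simple average.
group_forecast = sum(individual_projections) / len(individual_projections)

print(f"boss's projection:      {boss_projection:.1f}%")
print(f"group average forecast: {group_forecast:.1f}%")
```

As in the anecdote, the aggregated forecast sits far below the boss’s number, because no individual projection exceeded 5%; the point of betting or polling the room is to surface those independent judgments before the boss’s view anchors everyone.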


Red Sox vs. Yankees

Betting markets historically have been used to predict elections and are re-emerging as a form of forecasting, according to Wharton professor of business and public policy Justin Wolfers. He examined various methods of predicting the outcome of the 2001 federal elections in Australia and found that polls were fairly accurate predictors of election results in a short- to medium-term time frame of about six months. Economic models worked better for a longer horizon. Betting markets, he found, “not only correctly forecast the election outcome, but also provided very precise estimates of outcomes” in a host of individual electorates. “Particularly in marginal seats,” Wolfers writes, “the press may have better served its readers by reporting betting odds than by conducting polls.”


Betting markets act as a way of distilling masses of information, the same way that markets compress information to determine prices, Wolfers explains. He acknowledges that some people may pay a price to root for their chosen candidate by casting their lot with their favorites in the betting markets. For example, he admits that he placed some emotionally driven bets on the Red Sox against the Yankees in this year’s baseball playoffs.


“You might think guys like me are crazy, but the question is whether the odds are wrong as a result of people like me,” says Wolfers. “This is where you need to think of a betting market more like the stock market. Guys like me might buy [the stock of] a particular firm for any reason, but guys like Warren Buffett care only about profits. If Red Sox fans bid up the price, then Buffett will buy Yankee stock until it gets to equilibrium.”


Wolfers is working with an Irish Internet betting site to examine how betting markets function when they are asked to determine conditional outcomes. The site offers wagers on a Bush victory given several scenarios, including a red homeland security alert in place on Nov. 2, or a situation where Osama bin Laden is “neutralized” by Nov. 2. “This is a new form of market which allows us to get a handle on a market’s conditional expectations, or what the market says is the correlation between the two events,” Wolfers says. “We can get the expectation without having the event occur. We can have serious punditry where people are putting their money where their mouth is. That’s rare on Sunday mornings. There is a lot of cheap talk on the talk shows.”
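The conditional expectation Wolfers describes can be read straight off contract prices. If a contract pays $1 when both events occur, and prices track probabilities, then dividing the joint contract’s price by the price of the conditioning event’s contract gives the market’s implied conditional probability. The prices below are made-up illustrations, not the site’s actual quotes:

```python
# Illustrative contract prices, in dollars per $1 payoff (hypothetical numbers).
p_red_alert = 0.10           # pays $1 if a red alert is in place on Nov. 2
p_bush_and_red_alert = 0.07  # pays $1 if Bush wins AND a red alert is in place

# If prices approximate probabilities, the ratio is the market's
# conditional expectation: P(Bush wins | red alert).
p_bush_given_red_alert = p_bush_and_red_alert / p_red_alert

print(f"implied P(Bush wins | red alert) = {p_bush_given_red_alert:.0%}")
```

This is why such markets yield “the expectation without having the event occur”: the ratio is observable today, even though the red-alert scenario may never happen.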


Polling and the Information Age

Indeed, betting odds were published routinely as election forecasts before they were displaced by polls in the 20th century, says Wharton statistics professor Abraham Wyner.


Pollsters have become quite good at designing accurate polls, he adds, but it is expensive to do it right. Large polling groups, like The Gallup Organization and national television networks, continue to do a good job. At the same time, a new wave of polls that are cheap and non-scientific has emerged. Wyner calls them “samples of convenience … The science of polling has reached its state-of-the-art. We know what needs to be done; the problem is human beings get in the way of that.”


According to Wyner, new information technology has made it easier to conduct polls that reach many people faster, but that can lead to results that are not as accurate as a well-designed, smaller sampling. “The question that needs to be determined is how the information age will affect polling. Can you use a computer or a cell phone to do an effective poll?”


Many instant Internet and television polls are meaningless, he notes. “A television station can say, ‘Vote on our web site, or use your cell phone to call this number for free.’ These kinds of high-tech polls are cheap to do, but they are as valid as making up numbers. They are nonsense.” The worst part about that kind of poll, adds Wyner, is the disclaimer about its scientific validity. “They should just say, ‘We make this up.’”


New forms of communication have amplified the impact of polls, he points out. “Weblogs are constantly referring to the different polls. There are all kinds of discussions in real-time and high-profile chats going on about all this information. People are connecting in a way they never could before and information is proliferating in a way that it never could before. The polling organizations are now just a cog in a very big network that includes bloggers, individual bettors, and newspapers. It’s all part of a big system. It’s not simply the polls anymore.”