Go online, pick up your telephone or cell phone, read a newspaper or turn on the television: No matter what the medium, polls and the reporting of poll results are ubiquitous:


“For our end-of-the-year Special Report on Air Travel in 2007 … eight questions about how your air travel experiences in 2007 stack up against those of 2006…”


“Twenty-three percent of those questioned in a CNN/Opinion Research Corporation Poll released Thursday say that compared to other presidents in American history, President Bush is the worst ever…”


“What is your favorite car — the Lamborghini, Lotus, Aston Martin, Maserati, Ferrari…?”


When Wharton statistics professor Robert A. Stine reviewed the exotic car poll that his 11-year-old son had created for a sixth-grade class assignment, the questions he raised reflected, in many ways, the major concerns that surround today’s polling landscape: Are the polls accurate? Scientific? Reliable? Can the questions be manipulated to get a particular answer?


As Stine noted to his son, most of the students he polled had simply picked the first name on the list, the Lamborghini. “When we looked at his counts, I asked him if he thought that the kids who answered his question knew what a Lamborghini was. He didn’t think so,” says Stine. “That’s when I suggested he should have had a follow-up question. He should have showed them all a picture of the cars in the list and asked them to identify the one that is their favorite. Somehow it seems that if you don’t recognize the picture of your ‘favorite,’ it’s not really your favorite.”


As Stine and his Wharton colleagues attest, the outcomes of both political and marketing polls — and whether or not the public trusts the results — are influenced by many factors, including polling technology, how the question is worded, the perception of who is asking the question, when and how the polling sample is drawn, and who agrees to take the poll (the responders) and who decides not to (the non-responders).


The Most and Least Reliable


When it comes to polls, not all are created equal.


The most reliable? “Surveys conducted by professional polling organizations on a periodic basis which repeatedly ask the same question — such as, ‘Do you intend to buy a car in the next three months?’ — are fully scientific and useful,” says J. Michael Steele, Wharton professor of statistics. “Even though we really don’t know what a person means when he says ‘yes,’ we can make hay out of the fact that last year, 15% said ‘yes’ and this year only 5% said ‘yes.’” One organization that fits this profile is Gallup, whose Gallup Poll is considered a leading barometer of public opinion.
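

Steele’s point about repeated questions can be made concrete with a little arithmetic. The sketch below is a minimal Python illustration: it uses his 15%-to-5% figures but assumes hypothetical sample sizes of 1,000 respondents per wave, and checks whether a drop that large could plausibly be sampling noise:

```python
import math

def proportion_se(p: float, n: int) -> float:
    """Standard error of a sample proportion."""
    return math.sqrt(p * (1 - p) / n)

# Steele's figures: 15% said "yes" last year, 5% this year.
# The sample sizes are assumptions for illustration only.
p_last, n_last = 0.15, 1000
p_this, n_this = 0.05, 1000

se_diff = math.hypot(proportion_se(p_last, n_last),
                     proportion_se(p_this, n_this))
z = (p_last - p_this) / se_diff

print(f"Drop: {100 * (p_last - p_this):.0f} points; "
      f"SE of the difference: {100 * se_diff:.2f} points; z = {z:.1f}")
# z comes out near 7.6, far beyond 1.96, so a drop this
# size is not sampling noise.
```

Even without knowing exactly what “yes” means to each respondent, a change of this size is statistically solid, which is why the trend in a repeated, identically worded question carries real information.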


What about polls that are potentially informative but nonetheless problematic when it comes to reliability? They’re out there, says Steele, in the guise of surveys that don’t ask repeat questions but are based on an honest probability sample. Their validity, he notes, “all depends on the craft of the question. Marketing firms do this to get honest answers for their commercial use. Politicians often want honest answers, but sometimes are fishing for a news item to plant.”


By far the worst kinds of polls, according to experts, are Internet polls and magazine surveys that appeal only to those with a vested interest in the question. “They are worthless, except for the purpose of idle entertainment,” says Steele. “For example, magazine ‘fax back’ polls about the ‘right to choose’ almost always come back with tons of responses from the ‘right to life’ sector. They are junk.” Internet polls, he adds, “are crazy talk. There is no information at all, except perhaps in the number of people who respond. Such polls [do act] as a way to find hot topics in a target audience. This is good for magazines, but bad for readers. The only thing that’s slightly interesting is that they are a measurement of passionate responders.”


Business and marketing polls differ from political polls in purpose. Marketing polls are typically conducted to sell products, test market reception and consumer opinions, and identify customer preferences. Political polls, on the other hand, typically try to gauge voter preferences and opinions, and to determine which candidates will win elections.


“Marketing polls have their own purposes,” says Richard Johnston, professor of political science and research director of the National Annenberg Election Study at the University of Pennsylvania. “Most critically, they are less concerned than political polls with representation of the total population; they are more concerned with representation of the demographics where profit is to be found. The [target audience] tends to be more urban and more highly educated, for example. My sense is that marketers are quicker to adopt Internet polling because the samples delivered by that means are relevant to their objectives.”


The Impact of Technology


Wharton marketing professor Peter S. Fader believes that the “work of polling overall is as vital as ever.” But he cautions that technology has had a negative impact on business marketing. “Technology has made it worse,” he says. “The ease of collecting data on the Internet is wonderful — it reduces costs and gives more flexibility in terms of time and questions — but it makes companies a lot sloppier. You used to have to test and re-test and re-test a survey to make sure you got the questions right. Now, you simply say, ‘Oh, we’ll just do it again.’ But for the most part, the battery of issues that people ask about — familiarity, preference, awareness, intentions, behavior — is largely the same. Though the methods may have changed, the basic intentions for the use of polls are about as stable as anything you can find in marketing.”


The cost of polling has always played a role in polling techniques. With the introduction of the telephone, pollsters began to prefer telephone surveys over face-to-face interviews. According to Johnston, the costs range from $50 for a 30-minute telephone interview to $1,000 for a personal interview. But today the more economical telephone interview has its downsides, too. “For telephone surveys, the biggest problem is actually telemarketing,” Johnston says. “[Telemarketers] have beaten up households so much that it has made it harder for telephone surveys to get more households to cooperate. The response rates have gone down, for all forms of surveys.”


Abraham J. Wyner, Wharton professor of statistics, notes that “while the value of a good answer is worth the cost to get it,” it’s becoming more expensive to do a proper poll at a time when “the population is indifferent.” For instance, telephone polls are considered among the most reliable when it comes to following statistical models and obtaining a scientific random sampling. “But in the era of Caller ID, many people choose not to answer their phones if they see an out-of-area number on their machines.”


Indeed, a study conducted by the Pew Research Center in 2004 found that “more Americans are refusing to participate in telephone polls than was the case six years ago,” due to a growing number of unsolicited telephone calls and because potential respondents “are armed with increasingly sophisticated technology for screening their calls.” A typical survey that employed standard techniques used by opinion polling organizations obtained “interviews with people in fewer than three-in-ten sampled households,” representing a decrease of about nine percentage points from the late 1990s, the study reported.


The American Association of Public Opinion Research, an organization of public opinion and survey research professionals, points out that the survey research typically conducted by political and marketing polls is not covered by the recent “Do Not Call” registry, which was established by the Federal Trade Commission in June 2003 to meet the requirement of the Do Not Call Implementation Act. The law made it illegal “for telemarketers to call consumers with whom they did not have a prior business relationship. The FTC exempted survey and opinion research because it is a critical part of making and monitoring policy decisions.”


While pollsters have always grappled with people who refuse to answer questions after they pick up the phone, today’s pollsters are dealing with what Johnston calls “the silent refusal” — the person who just decides not to pick up the phone. And it’s a big problem. “Willingness to answer the phone, to a great extent, is quite independent of other characteristics, which includes being interested in politics,” says Johnston. “The kinds of people who answer political surveys now, compared to 40 years ago, are the more interested stratum. You got more apathetics, more marginally interested people 40 years ago than you do now.”


It’s unclear, however, what effect the “silent refusal” has on the ability of pollsters to obtain representative samples. The 2004 Pew study found that while polls face growing resistance and a drop in participation, “carefully conducted polls continue to obtain representative samples of the public and provide accurate data about the views and experiences of Americans…. The decline in participation has not undermined the validity of most surveys conducted by reputable polling organizations.”


Landlines vs. Cell Phones


But Johnston remains concerned. “The coverage question is really worrying people,” he says. “Basically, how accurately can you identify the population of individuals from which you want to draw a sample? Telephone technology became the dominant way to do interviews as landlines became available to virtually everyone in America. But now it’s going in the other direction. There is a relationship between youth and lack of access to a landline. Pollsters are trying to calibrate what you are missing when you miss cell phones. The answer so far? When it comes to political polls, it’s not much. The cell phone users who do vote don’t look particularly different from those voters who have access to a landline. The telephone survey business is hanging in there, but everyone is worried.”


When it comes to cell phone usage and polling outcomes, research supports Johnston’s statement. According to Scott Keeter, director of survey research at the Pew Research Center, nearly 13% of U.S. households today cannot be reached by the typical telephone survey because they have only a cell phone and no landline — a figure that may approach 25% by the end of 2008 “if the current rate of increase is sustained,” Keeter notes in a 2007 Pew report titled, “The Landline-less are Different and Their Numbers are Growing Fast.”


But so far, Keeter concludes in the report, the impact of the cell-only phenomenon on polling results has been surprisingly minimal. Based on four studies conducted by Pew in 2006 that compared cell phone responders to landline responders, Keeter writes that “none of the measures would change by more than 2 percentage points when the cell-only respondents were blended into the landline sample. Thus, although cell-only respondents are different from landline respondents in important ways, they were neither numerous enough nor different enough on the questions we examined to produce a significant change in overall general population survey estimates when included with the landline samples and weighted according to U.S. Census parameters on basic demographic characteristics.”
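

Keeter’s blending procedure amounts to post-stratification weighting. The Python sketch below is a minimal illustration under stated assumptions: the single “age” variable, the population shares, and the four-row sample are placeholders standing in for Pew’s actual Census parameters and survey data, which this article does not reproduce:

```python
from collections import Counter

# Hypothetical blended sample: landline and cell-only respondents,
# each with one demographic attribute and a yes/no answer.
respondents = [
    {"age": "18-29", "source": "cell_only", "answer": 1},
    {"age": "18-29", "source": "landline",  "answer": 0},
    {"age": "30-49", "source": "landline",  "answer": 1},
    {"age": "50+",   "source": "landline",  "answer": 0},
    # ...a real survey would have hundreds or thousands of rows
]

# Assumed population shares by age bracket (illustrative, not Census data).
population_share = {"18-29": 0.22, "30-49": 0.36, "50+": 0.42}

# Shares observed in the blended sample itself.
counts = Counter(r["age"] for r in respondents)
sample_share = {age: c / len(respondents) for age, c in counts.items()}

# Post-stratification weight: population share over sample share, so
# over-represented groups are down-weighted and vice versa.
for r in respondents:
    r["weight"] = population_share[r["age"]] / sample_share[r["age"]]

weighted_yes = sum(r["weight"] * r["answer"] for r in respondents)
total_weight = sum(r["weight"] for r in respondents)
print(f"Weighted 'yes' estimate: {weighted_yes / total_weight:.1%}")
```

Once demographic balance is restored by weights of this kind, folding in the cell-only respondents barely moves the overall estimates, which is Keeter’s finding.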


It’s not illegal to conduct cell-phone surveys — just more difficult and more expensive than surveys conducted over landlines. Federal law prohibits the use of automated dialing devices when calling cell phones, so each number in the cell phone sample must be dialed manually. Writes Keeter: “The screening necessary to reach cell-only respondents among all of those reached on a cell phone greatly increases the effort needed to complete a given number of interviews. Pew estimates that interviewing a cell-only respondent costs approximately four to five times as much as a landline respondent.”


The Power of Prediction Markets


The rapidly changing landscape of responders and related technology factors are two reasons why Justin Wolfers, Wharton professor of business and public policy, believes in the power of prediction, or betting, markets. Wolfers — who is associated with several prediction market sites such as InTrade.com or Tradesports.com, where participants buy and sell contracts on sports and potential political outcomes — argues that prediction markets are a more reliable outcome predictor than polls, for three reasons.


“First, by forcing you to ‘put your money where your mouth is,’ they yield truthful revelation of beliefs,” Wolfers notes in a paper on pricing political risks with prediction markets. “Second, markets provide profit opportunities for those willing to gather new information that helps predict the future. And third, markets aggregate information dispersed across many traders.”


“You are not asking who they will vote for, but who they think will win,” says Wolfers. “The evidence is overwhelming that prediction markets provide a more accurate prediction than polls. On average, the final forecast from a Gallup poll is within about 2.25 percentage points, and the average for prediction markets is 1.5 percentage points.”
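

The comparison Wolfers is making is a mean absolute error computed on final pre-election forecasts. Here is a toy version in Python; the vote shares below are entirely invented for illustration, and his 2.25- and 1.5-point figures come from his own research, not from this data:

```python
def mean_absolute_error(forecasts, outcomes):
    """Average absolute miss between forecasts and outcomes, in points."""
    return sum(abs(f - o) for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical vote shares (%) for three past races, illustration only.
actual         = [52.0, 48.5, 50.1]
final_poll     = [49.0, 50.0, 52.5]  # final pre-election poll numbers
market_implied = [51.0, 49.5, 51.0]  # vote shares implied by contract prices

print(f"Poll MAE:   {mean_absolute_error(final_poll, actual):.2f} points")
print(f"Market MAE: {mean_absolute_error(market_implied, actual):.2f} points")
```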


He points out that “the idea of betting on presidential elections is not new at all. Betting on elections has been going on for the last 100 years. If you read The New York Times from the turn of the century, they will report what is in the prediction markets — called ‘betting markets’ back then — and not polls, which hadn’t yet been invented. But since 1940, the elections have been dominated by polls.”


Wolfers predicts that “within a few years and a couple of election cycles, we will be back to tracking political markets through the lens of prediction markets instead of polls. In fact, in the last few election cycles, we have seen political commentators talking more and more about the race in light of prediction markets.”


Donald F. Kettl, director of the Fels Institute of Government and professor of political science at the University of Pennsylvania, notes that the influence of prediction markets and traditional polling has already had a tremendous impact on the 2008 election. “The first thing that’s so obvious is that the horse race issues are even more pronounced than before. There is no one in the lead in the Republican race, and the polls keep underlining that Hillary Clinton has things all wrapped up in the Democratic race. So far, the polls have played a tremendous role in the way the campaigns have played out.”


When this happens early in a campaign, Kettl argues, it changes not only the public’s perception but also the amount of attention people pay to the issues. “People like to pay more attention when there is a tough campaign battle. This is part of the issue. It makes it that much harder for those who are ranked second, third and fourth to get into the battle if it appears that there is a done deal. You can see it especially in the Barack Obama campaign. [It’s difficult] for him to break through and establish himself as a strong national candidate, [because] Clinton has been in charge of the polls for so long…. You have what amounts to a feedback loop — one that tracks back to candidates and the public very quickly, where the perception becomes reality and reality becomes perception.”


Kettl agrees with others that in 2008, “it is getting harder and harder to do a good poll. It’s a tricky operation to figure out how to capture everybody. The basic problem of getting a good random sample of the population in order to estimate what is going to happen when people actually turn out to vote is even harder. It’s this growing collection of difficult problems that are driving pollsters crazy.”


Kettl predicts that pollsters will have to eventually turn to both web-based and telephone-based survey methodologies. “I think that probably for the next 18 months, pollsters are going to try hard to refine the current processes and methods and find a way to crack the code. Estimating who is most likely to turn out to vote — that is the key. That’s what separates the good pollsters from the bad.”


Johnston agrees. “Even though many find it annoying, the web may be the future of serious polling,” he says. “With the 2008 [Annenberg Election] study, we want to do some controlled comparisons between the web and the phone.”


Making Sausage


When Stine considers the ultimate concern — “Can we trust polls?” — he acknowledges that this is a very tough question. “How would you know? Trust it for what? To get a pulse of what the country thinks? I don’t know. It’s like the talking heads who describe why the stock market went up or down. Today, I can always describe ways in which it went down yesterday. But whether it is true or not is a much more subtle issue.”


Steele remains equally skeptical. His bottom line on polls? “I refuse to answer any. I just tell them I’m a statistician. Do polls affect policy? Who knows how policy is really made? I’m told it resembles the manufacture of sausages. The thing to think about with polls is confirmation bias, which is basically that we are always looking for more evidence for the things we already believe in. If you are told as a child that ‘elves cause rain,’ then every time it rains there is more evidence of elves. We all end up with some set of opinions, but once we have them, we look for more evidence to reinforce them.”