As predictive analytics becomes more sophisticated, companies are increasingly relying on aggregated data to help them with everything from marketing to new product lines. But how much should firms trust the wisdom of the crowd? In his latest research, Wharton marketing professor John McCoy proposes a new approach to crowdsourcing that can produce better, more accurate results: Instead of going with the most popular answer to a question, choose the answer that is “surprisingly popular.” His paper, written jointly with Drazen Prelec and H. Sebastian Seung and published in Nature, is titled “A Solution to the Single-question Crowd Wisdom Problem.” He spoke to Knowledge at Wharton about why there’s plenty of wisdom in the crowd for those willing to ask the right questions.
An edited transcript of the conversation follows.
Knowledge at Wharton: The power of the crowd to make predictions or recommendations has gained wide acknowledgement in the past couple of years. Can you talk about how this has developed, where it’s being used and what some of the limitations are?
John McCoy: Many companies, for instance, are using internal prediction markets to try to draw on the wisdom of all their employees. Government agencies are using the crowd to make economic or geopolitical forecasts. I’m not sure that there are limitations to using the wisdom of the crowd, per se. I think the limitations are in some of the current methods for extracting wisdom from the crowd. For instance, many of the methods that existed before our work assume that the majority is usually correct, or assume that it’s easy to tell almost immediately who in the crowd is an expert. In fact, that’s often not the case.
Knowledge at Wharton: Your research proposes a new way of using the wisdom of the crowd: Instead of using the most popular answer, use what you call the “surprisingly popular” answer. Can you explain?
McCoy: If you think about doing something like a majority vote, what you’re doing is just taking the most popular answer — the answer that people give most frequently. We say instead that the right thing to do is take what we call the surprisingly popular answer. The idea is that you want to ask the crowd for two bits of information — not just for their own answer, but also for their predictions about the answers of other people in the crowd. Taking the surprisingly popular answer then means comparing the actual vote frequency with the predicted vote frequency, and choosing the answer whose actual vote frequency exceeds its predicted vote frequency.
I can give an example. Consider a simple factual question: Is Philadelphia the capital of Pennsylvania? The correct answer is no. The capital is Harrisburg. But many people think it is, because Philadelphia is a large, populous city that most people know about. When you put that question to a crowd, as we did with MIT students, only about a third of the crowd gives the correct answer. But we can also look at the crowd’s predictions about what people in the crowd will do. If you ask everybody in the crowd to predict what fraction of people will answer no, the crowd thinks that only 23% of people will answer no. So our method says to select no, the correct answer, because even though it’s not the most popular answer — only 33% of people endorsed it — it is the surprisingly popular answer: the actual 33% far exceeds the predicted 23%.
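To make the selection rule concrete, here is a minimal sketch in Python. It is not code from the paper; the function name and dictionary inputs are illustrative, using the Philadelphia numbers from the example above.

```python
def surprisingly_popular(actual, predicted):
    """Pick the answer whose actual vote share most exceeds the
    crowd's predicted vote share for that answer."""
    return max(actual, key=lambda ans: actual[ans] - predicted[ans])

# Philadelphia example from above: only 33% answer "no" (the truth),
# but the crowd predicts that just 23% will say "no", so "no" is
# surprisingly popular even though "yes" wins the majority vote.
actual = {"yes": 0.67, "no": 0.33}       # actual vote shares
predicted = {"yes": 0.77, "no": 0.23}    # mean predicted vote shares
print(surprisingly_popular(actual, predicted))  # -> no
```

For a yes/no question like this one, whichever answer beats its own prediction wins; with more options, the same actual-minus-predicted comparison ranks every answer.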
Knowledge at Wharton: How would companies have to adapt what they’re doing to adopt this approach?
McCoy: Some companies at the moment are doing things like simple majority votes or weighting votes by competence. Other companies are using prediction markets. Here’s a very simple method instead: you ask a group of employees for their own answers to some question — a market forecast, say — and you ask them to predict their colleagues’ answers. Then you simply combine these two pieces of information.
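As a rough sketch of what combining those two pieces of information could look like in practice (the survey format, the question, and the responses here are invented; the paper defines the selection rule, not a particular pipeline):

```python
from collections import Counter

# Invented survey format: each employee answers a forecast question
# ("Will sales go up or down?") and predicts what share of colleagues
# will give each answer.
responses = [
    {"answer": "up",   "prediction": {"up": 0.9, "down": 0.1}},
    {"answer": "up",   "prediction": {"up": 0.8, "down": 0.2}},
    {"answer": "down", "prediction": {"up": 0.9, "down": 0.1}},
]

n = len(responses)
# Step 1: actual vote share for each answer.
actual = {a: c / n for a, c in Counter(r["answer"] for r in responses).items()}
# Step 2: mean predicted vote share for each answer.
predicted = {a: sum(r["prediction"][a] for r in responses) / n for a in actual}

# Step 3: choose the answer that beats its prediction. The majority
# says "up" (67%), but "down" (33% actual vs. 13% predicted) is the
# surprisingly popular answer, so it gets selected.
best = max(actual, key=lambda ans: actual[ans] - predicted[ans])
print(best)  # -> down
```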
One of the nice things [about this method] is that it has far greater scope than a lot of what companies are currently doing. If I’m a big company using prediction markets, I’m very limited in the kinds of questions I can ask because, for instance, I’ve got to be able to immediately pay people off on their answers — did this event happen or not happen? Did this product sell or not sell? With our method, you don’t need anything like that. All you need are the answers and the predictions, and you can apply it immediately.
Knowledge at Wharton: Whenever someone asks us a question, we’re not just trying to give the right answer or the answer that we think is best. We’re also thinking about who is asking and what they want to hear. It seems like this model brings that more into play.
McCoy: Yes, to some extent. Certainly, one of the nice things is that you can use the two inputs to this model — people’s own answers and their predictions about the answers of other people — to incentivize them in clever ways to tell the truth. By combining these two pieces of information, even if I don’t know the right answer, I can incentivize people to give what they see as the true answer, without their taking into account the other factors you’re talking about.
Knowledge at Wharton: What lessons are there for industries that want to incorporate more crowd wisdom into whatever they’re doing?
McCoy: I think one big lesson is that the crowd is a lot smarter than many people give it credit for. As you said earlier, there’s been increasing interest in the wisdom of the crowd, but our method shows there are a whole bunch of questions that you would normally have thought the crowd would get wrong, because the majority gets them wrong. Yet with smart, sophisticated methods, even in those cases the crowd often knows the correct answer. Part of the takeaway for these industries is: wow, your employees actually know a lot if you put them together — and if you use methods like ours, you can extract that wisdom.
Knowledge at Wharton: What are some future directions for your research?
McCoy: I’m doing a lot of work to follow up on the idea of the surprisingly popular answer. One direction is applying it in other contexts. In much of the original work, we assumed that a question had a small number of discrete answers — one, two, three possible answers. Now I’m really interested in applying it to continuous quantities: can you predict things like what the price of gold is going to be? I’m also interested in applying this [concept] to predicting product purchases. I can ask you, “Do you like this particular product? Do you think your friends will buy it?” Then we can use those two pieces of information together to do a better job of predicting which products will sell. I’m also interested in the psychology of how people actually arrive at these answers and predictions — I’m doing work in the lab trying to get at the mechanisms behind this.