Wharton’s Stefano Puntoni speaks with Wharton Business Daily on SiriusXM about his research on customers’ attitudes toward algorithms.

Customers feel good about a company when its representatives make decisions in their favor, such as approving their loan application or gold member status. But when an algorithm reaches the same favorable conclusion, those warm and fuzzy feelings tend to fade.

This surprising asymmetry is revealed in a new paper that examines how customers react differently depending on whether a computer or a fellow human being decides their fate.

In the study, Wharton marketing professor Stefano Puntoni and his colleagues found that customers are happiest when they receive a positive decision from a person, less happy when the positive decision is made by an algorithm, and equally unhappy with both man and machine when the news is bad.

“What’s interesting is that if you talk to companies, they’ll often tell you that they’re reluctant to let algorithms make decisions because they are worried about what would happen to customers when things go wrong. But we don’t actually find that. The negative consequences of using algorithms for companies seem to be, in fact, when the news is good,” Puntoni said during an interview with Wharton Business Daily on SiriusXM.

The researchers believe the results can be explained through attribution theory, a concept from psychology that describes how people explain the causes of events in order to make sense of their experiences and their place in the world. Simply put, people have a psychological need to feel good about themselves, and it helps to internalize a good decision and externalize a bad one. When a company representative greenlights a request, customers attribute the approval to their own exemplary behavior, social status, excellent credit score, or other value they bring to the firm. That’s harder to do when the decision-maker is a bot.

“The negative consequences of using algorithms for companies seem to be, in fact, when the news is good.”— Stefano Puntoni

“These decisions are diagnostic of some characteristic of ourselves,” Puntoni said. “People find it easier to internalize the good decision when the decision was made by a person. Now they get what they want, and it feels better to them that it was a human [deciding] than if it was an algorithm.”

Consumers externalize bad outcomes to protect their feelings of self-worth.

“When they get negative news, the story is different. Then we find that customers blame the decision-maker for why they did not get what they wanted,” Puntoni said. “In that case, they will do so no matter who or what made the decision. They just use different strategies to externalize the outcome.”

The paper, “Thumbs Up or Down: Consumer Reactions to Decisions by Algorithms Versus Humans,” was published in the August edition of the Journal of Marketing Research. The co-authors are Gizem Yalcin, marketing professor at the University of Texas at Austin; Sarah Lim, business administration professor at the University of Illinois Urbana-Champaign’s Gies College of Business; and Stijn M.J. van Osselaer, marketing professor at Cornell University’s Samuel Curtis Johnson Graduate School of Management. The authors also published a summary of their research in MIT Sloan Management Review.

“We find that customers blame the decision-maker for why they did not get what they wanted.”— Stefano Puntoni

A New Look at Old Behavior

The paper, which documents 10 separate studies that the professors used to test their theory, is novel in its approach. There’s already plenty of anecdotal and scientific evidence that customers have an aversion to algorithms. When given the choice, consumers don’t usually prefer chatbots to manage a service complaint and would rather not use software to get medical advice or predict stock prices.

But the professors wanted to know what happens when customers don’t have a choice. Theirs is the first paper to examine how customers’ attitudes are influenced by algorithmic versus human decision-making.

“Our research context is of managerial importance,” the authors wrote, noting that their findings go against the conventional belief that bots are bad for business. As companies increasingly deploy algorithms to streamline tasks, drive down costs, and boost efficiency, many managers worry that doing so will alienate customers.

Puntoni also pointed out a bit of irony in the findings: “If you think about it for a second, algorithms are expected to be more objective and more unbiased than humans. So, if algorithms say you deserve it, maybe [that is] an even better inference you could make about yourself,” Puntoni said. “But we don’t find people thinking like that. They just react more positively to a human giving good news than an algorithm.”

Humanize the Bot

What can companies do to mitigate the negative consequences of algorithms? According to the study, one solution is to humanize the bot. Anthropomorphizing the algorithm to make it seem more like a person may leave customers feeling better about the outcome when they receive positive news.

The scholars tested this idea in one study in which participants were told they were submitting applications to a country club. Depending on the condition, the application was reviewed by a robot, a real person named Sam, or an algorithm depicted as a cartoon man or woman named Sam. Although all the applications were accepted, the participants felt better about the club when dealing with the real person and worse when dealing with the bot. But their feelings about the person and the human-like algorithm were similarly positive.

The paper notes that many companies are already experimenting with strategies that combine both algorithms and human decision-making. But the authors contend that it isn’t enough to have employees merely observing these automated functions; representatives need to be actively involved if they want better customer feedback.

Puntoni also offered cautionary guidance to companies that rely on algorithms to perform human resources tasks such as shortlisting job candidates or judging performance. Deploying algorithms for those tasks can have repercussions that ripple across the company.

“In other work in progress, we find that people feel a bit alienated and objectified when an algorithm is put in charge of deciding how good an employee is, and that may have consequences for the way the employee feels about the company and co-workers,” he said.