Fans of the hit TV comedy “Seinfeld” may remember an episode in which Jerry’s friend George leaves his car parked at work so that the boss will think George is putting in long hours, even when he’s not.
The idea, of course, is that George’s apparent productivity will net him a better performance review and a higher raise or bonus.
Wharton professor Maurice Schweitzer would call George’s behavior “an attempt to invoke the input bias – the use of input information (in this case the false impression of long hours) to judge outcomes.”
Yet business decisions are frequently made based on input information that is either biased or manipulated, according to Schweitzer and colleague Karen R. Chinander, a professor at Florida Atlantic University. They define input bias as “the systematic misuse of input information in judgments of outcome quality.” While the researchers note that the quality of a decision is often “positively related” to the quantity of the inputs used to make that decision, “the relationship between input quantity and output quality is not automatic. In many cases inputs are misused, misrepresented or even negatively related to outcome quality.”
The two researchers recently published a paper on this topic in the July 2003 issue of Organizational Behavior and Human Decision Processes. In their paper, titled “The Input Bias: The Misuse of Input Information in Judgments of Outcomes,” the authors report on the results of four experiments looking at the link between input information and judgments.
Blowing Up Sand
In the first experiment, 83 participants were asked to rate the quality of two video-taped presentations about an emerging technology. In the first part of the experiment, 41 participants were told that the person giving the first presentation – on electronic ink – had spent 8 hours and 34 minutes preparing his remarks, while the person offering the second presentation – on optical switches – had spent 37 minutes preparing.
In the second part of the experiment, 42 participants were told the opposite: that the optical switches presentation was based on eight-plus hours preparation and the electronic ink presentation on 37 minutes.
Participants were asked to rate the presentations on such factors as quality of information, quality of presentation skills and knowledge of the subject.
“What we found,” says Schweitzer, “was that the preparation time we gave participants significantly influenced their quality assessments. Participants exposed to the long preparation time rated the quality of the same presentation higher than participants exposed to the short preparation time.”
Not just somewhat higher, but significantly higher – by 21% for the first presentation and 12% for the second presentation, adds Schweitzer. The study also showed that the same pattern of results occurred “even among participants who believe input time should not and did not influence their judgment.”
In many settings, the researchers note, “irrelevant input measures, such as the amount of time an employee spends in the office, influence outcome assessments, such as performance reviews.” Another example: Some analysts use research and development expenditures as measures of a firm’s innovativeness, even though “differences in R&D expenditures across firms can have more to do with accounting practices and where the research is done than the amount of actual innovation” going on, says Schweitzer.
Often, he adds, it is “difficult to judge the quality of outcomes directly. For example, how do we gauge the quality of legal representation or the productivity of an employee over the course of a year? This information is frequently ambiguous, especially where it concerns decisions involving hiring, promotions, compensation and so forth.” And in many cases, says Schweitzer, “input information is indeed relevant. Consider the extra effort a person puts into an office task. That effort, even if it doesn’t bring results this year, could in fact bear fruit the following year. So you want to reward that type of initiative and encourage the employee to continue to work hard in the future.”
At the same time, it is important for managers to recognize the nature of the correlation between inputs and outcomes. For example, “during the first Gulf War, the military reported in great detail the quantity and weight of bombs that allied troops had dropped – input values that are easy to measure,” says Schweitzer. “But those bombs could have just been blowing up sand. We don’t know how loosely or tightly correlated the information is with the intended results.”
Schweitzer and Chinander add that while using input measures to determine outcome quality is appropriate when the relationship between inputs and outputs “is direct, consistent and unbiased,” in many cases, “those conditions do not hold. First, the relationship between inputs and outcomes is not always positive … For example, longer hospital stays are not always better” and can in fact lead to greater risks of infection.
“Second, the relationship between inputs and outcomes may be inconsistent across individual organizations,” for various reasons, including the fact that some individuals or companies may be more efficient than others. “Lastly, some input measures can be purposefully manipulated to bias outcome assessment.”
The authors cite the National Bicycle Industrial Company, which manufactures and delivers custom-ordered bicycles three weeks after an order is placed, “even though it only takes them about three hours to make the bicycle.” The authors speculate that “the slow delivery time gives customers the sense that their customized product, which purportedly took a long time to produce, has greater value.”
The Fudge Test
In the researchers’ second experiment, 60 participants were asked to sample two batches of fudge made with similar ingredients, although the ingredients had been “mixed and cooked differently.” Fudge was chosen for this experiment because people eat and evaluate fudge based almost exclusively on its taste. “There are rarely other reasons – such as nutritional value – for consuming fudge,” the authors note. As in the first study, half the participants were told Fudge A was made using expensive machinery and Fudge B using inexpensive machinery; the other half was told the opposite.
Again, the evaluations were “significantly influenced by the expense of the machinery involved in making the fudge” – i.e. those believing the fudge was made by the more expensive machines rated it higher in quality.
And again, most participants said they believed the expense of the machinery “was irrelevant to judging the quality of the fudge” despite their documented reliance on that data. “People may automatically associate high input quantities with high output quality – even when they recognize that input quantities should be irrelevant,” the authors note.
In the paper’s third study, the authors investigate whether the timing of input information matters. In this study participants viewed a video-taped presentation first, then were told about the preparation time, and then evaluated the presentation. As before, the researchers conducted two versions of the study, one based on the long preparation time, the other based on the shorter time.
As in the first two studies, ratings were significantly higher when preparation time was long than when it was short, suggesting that “even when input information is presented after the evaluation process should have concluded, people are still influenced by input quantity information.”
In their fourth experiment, Schweitzer and Chinander examine the link between input quantity and perceived outcome quality for very low quality outcomes. “We expect that when decision makers experience a very low quality outcome they will think more critically than they do when they experience a high quality outcome, and hence may disassociate high input quantities with good outcomes,” they write.
To test that hypothesis, the researchers asked participants to compare raspberry and lemon tea in one of two conditions – a high quality condition in which one-third cup of sugar was added to two quarts of each kind of tea, and a low quality condition in which one tablespoon of salt and a half cup of lime juice were added to two quarts of each kind of tea.
Participants were told before their taste test that both samples of tea were “made with similar ingredients. However the ingredients were mixed and brewed differently.” Half the participants were told Tea A was made using expensive machinery and Tea B using inexpensive machinery; these descriptions were reversed for the other half.
“Consistent with prior results, preference ratings in the high quality conditions were higher when the raspberry tea had a high input description,” the authors write. “The pattern of results, however, does not characterize ratings in the low quality conditions. In this case average ratings when the raspberry tea had a high input description were similar to the average ratings when the raspberry tea had a low input description.
“While input information significantly biased evaluations of high quality outcomes, this same information did not influence evaluations of very low quality outcomes … We believe low quality experiences heighten concern and motivate systematic information processing,” the authors note, adding that the judgments of decision makers who experience low quality outcomes, therefore, are less likely to be influenced by the input bias.
“Much of the way we process information and make judgments is automatic,” Schweitzer adds. “In our experiment, people judged the pleasant teas on ‘automatic pilot’ and ‘automatically’ used input information to inform their judgments of outcomes. When participants tasted the very unpleasant teas they came out of an automatic pilot mode and made a careful and systematic assessment.”
The implications of their findings for managers are significant, the authors suggest, because “outcome assessment is a key component of both individual and organizational decision making, and our results demonstrate that these assessments are likely to be systematically biased by the input quantities used – or purportedly used – to attain outcomes. In general, input measures are relatively easy to manipulate” and the misuse of input information to judge outcomes may be widespread. For example, the authors suggest that “managers often use input measures – e.g. the number of hours spent with a client – to assess productivity in a way that leads employees to make decisions that are not consistent with the firm’s underlying goals.”
Is it possible to de-bias decisions? “In other words, can I as a manager make sure I am rewarding the right employees and applying judgments that reflect my underlying goals rather than psychological processes that bias my judgment?” Schweitzer asks.
Not easily, he answers. “For many biases, just the mere knowledge of them isn’t good enough. Most people in our experiment knew the input information was irrelevant, but they still used this information when judging quality. You have to design measures and put processes in place so that you can carefully assess exactly what the outcomes are that you need. It is often up to senior management to formalize the processes necessary to make important decisions, such as by using blind reviews of outcome measures. Doing this can be difficult, slow and expensive.”