Wharton marketing professor Josh Eliashberg has a message for Hollywood: Get geeky.
The use of statistical analysis and computer models, Eliashberg says, can help managers in the movie industry understand why ratings on a given film will vary from country to country. Even more radically, they can lead to the better evaluation of scripts. And using these sorts of techniques, he insists, won’t dim the magic of the silver screen: Movie making can be both grounded in science and enlivened by art.
“Few executives in Hollywood dare to take the rigorous perspective that other industries have adopted,” Eliashberg says. “The movie industry struggles between the needs of business and the belief in art, and many of its top executives, if you look at their backgrounds, come mainly from the creative community.”
But the “creatives,” as they are known, need to listen to the bean counters because of the amount of money involved when studios win — or lose — big. The average movie costs about $60 million to make and another $30 million to $35 million to distribute and market, Eliashberg notes. And it typically loses money. The industry depends on blockbusters like Titanic — which grossed more than $600 million at the box office — to compensate for flops like The Adventures of Pluto Nash, which lost nearly $100 million. The setup makes wildcat oil-drilling look low risk.
In two recent research papers, Eliashberg offers up tools that could help movie producers and distributors achieve better returns on their investments. With Sam Hui, a Wharton doctoral student, and John Zhang, a Wharton marketing professor, he has devised a computer program for predicting which scripts will appeal to moviegoers and, with Mark Leenders of the University of Amsterdam, he has analyzed how rating boards in various countries assess films. Ratings matter because a less-restrictive rating translates to bigger sales at the box office and in the TV and DVD markets.
Both papers grew out of Eliashberg’s longtime fascination with films — “I’m a movie freak,” he says — and his earlier studies on movie-revenue prediction. Previously, he developed a model for determining how long theaters should run films, based on their early ticket sales, and later he devised one for predicting, before release, a movie’s revenue — based on its “rough cut,” the film equivalent of a rough draft.
“I was challenged on my earlier studies, and the challenge came from some movie producers,” he says. “They said, ‘What you have done is fine, but usually all we have is a script, and we have to make decisions on whether to make those into movies.'”
These days, to find promising scripts, producers ask professional readers to cull through thousands of them; more than 15,000 screenplays a year are registered with the Writers Guild of America alone. “Despite the huge amount of money at stake, this process, known as ‘green-lighting,’ is largely a guesswork based on experts’ experience and intuitions,” Eliashberg and his co-authors write.
Guessing wrong doesn’t just hurt financially, as when a big-budget movie — Gigli comes to mind — flops. It can also mean missed opportunities. “Even the scripts for highly successful movies, such as Star Wars and Raiders of the Lost Ark, were initially bounced around at several studios before Twentieth Century Fox and Paramount, respectively, agreed to green-light them,” the three scholars point out.
Eliashberg, Hui and Zhang’s computer model, explained in their paper titled, “From Storyline to Box Office: A New Approach for Green-Lighting Movie Scripts,” reads storylines electronically and analyzes script elements — like a clear premise and a sympathetic hero — that appeal to moviegoers. To create the program, the scholars relied on a technique called natural language processing. And they asked human raters to answer 22 yes-or-no questions about each film they evaluated. They fed those answers into their program as well.
They applied their model to the “spoilers” — that is, script summaries — of 281 movies released between 2001 and 2004. The median return on investment for these films was -27.2%. The scholars broke the films into two groups. They used the first 200 to calibrate their model and the remaining 81 to make predictions.
They found that their model could predict nearly two-thirds of the time whether a movie would perform better or worse than the median. “Although a correct classification rate of 61.7% is not very high in an absolute sense,” they write, “this is rather expected, as we do not use many other factors that are known to affect the final return after a movie production, including advertising and promotion effort, seasonal effects, screen numbers, competition, etc.”
From the 81, they pulled out the 30 that their model rated most highly. If the studios had made just these films, they would have seen a return on investment of 5.1%, the scholars calculated. Granted, that’s hardly more than an investor could get currently on a U.S. Treasury bond. But it’s far better than either a random selection of 30 films from the group, which returned -18.6%, or a portfolio designed to replicate a typical studio’s choices, which returned -24.4%.
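The paper’s own model is proprietary and far more elaborate, but the calibrate-then-hold-out workflow it describes can be sketched in a few lines. The toy below is only an illustration of that workflow, not the authors’ method: the 22 yes-or-no questions, the 200/81 split, and the top-30 portfolio come from the article, while the ROI data, the “good” trait indices, and the simple mean-difference scoring rule are invented for the example.

```python
import random

random.seed(0)

N_QUESTIONS = 22                  # raters answered 22 yes/no questions per script
N_CALIBRATE, N_HOLDOUT = 200, 81  # the paper's split of its 281 films

# Synthetic data: each script is a vector of yes/no answers plus an ROI.
# Hypothetical "good" traits (say, clear premise, sympathetic hero) nudge ROI up.
GOOD = {0, 3, 7}

def make_script():
    answers = [random.random() < 0.5 for _ in range(N_QUESTIONS)]
    roi = -0.27 + 0.3 * sum(answers[i] for i in GOOD) + random.gauss(0, 0.3)
    return answers, roi

scripts = [make_script() for _ in range(N_CALIBRATE + N_HOLDOUT)]
calibrate, holdout = scripts[:N_CALIBRATE], scripts[N_CALIBRATE:]

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs) if xs else 0.0

# "Calibrate": weight each question by how much the average ROI of yes-answers
# exceeds that of no-answers (a crude stand-in for the paper's statistics).
weights = []
for q in range(N_QUESTIONS):
    yes = mean(roi for a, roi in calibrate if a[q])
    no = mean(roi for a, roi in calibrate if not a[q])
    weights.append(yes - no)

def score(answers):
    return sum(w for w, a in zip(weights, answers) if a)

# "Green-light" the 30 highest-scoring holdout scripts and compare portfolios.
ranked = sorted(holdout, key=lambda s: score(s[0]), reverse=True)
top30_roi = mean(roi for _, roi in ranked[:30])
all_roi = mean(roi for _, roi in holdout)  # roughly what random selection returns

print(f"model's top-30 portfolio ROI: {top30_roi:+.1%}")
print(f"average holdout ROI:          {all_roi:+.1%}")
```

Even this naive scoring rule tends to assemble a top-30 portfolio with a better average ROI than the holdout group as a whole — the same kind of gap, in spirit, as the 5.1% versus -18.6% comparison the scholars report.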
Eliashberg concedes that traditional movie moguls probably would balk at having a computer sign off on scripts. “It would threaten them,” he says. “An objective way of evaluating scripts makes their stature more questionable. If there’s no substitute for my knowledge, then I’m in good shape.”
And he’s sure he will hear the objection that computer evaluation would lead to formulaic movies — although, given the popularity of sequels and trilogies in Tinseltown, it seems formulas already prevail there. He and his co-authors try to head off this criticism in their paper: “Rather than coming out with a set of rigid rules to follow, our approach will only suggest the structural regularities that a successful script generally possesses. We believe that there is room for creativity within the structural regularities.”
Their model might tell a studio which 20 movies out of a group of 100 look like they would appeal to moviegoers, but it wouldn’t tell them which actors and directors to employ or even how the precise details of a story should be filled in. A sympathetic hero, after all, could be a little fish named Nemo or a big ape named King Kong.
Ratings and Financial Success
In the second movie-industry study, “Antecedents and Consequences of Third-Party Products Evaluation Systems: Lessons from the International Motion Picture Industry,” Eliashberg and his co-author, Mark Leenders, show that ratings of the same film can differ radically from country to country. Raters in the United States, for example, tend to be far stricter than those in France, limiting access for kids and teens to more movies. In fact, the two scholars found that, “all non-U.S. countries we studied tend to be more lenient, relative to the U.S.”
Why should business scholars puzzle over something that’s usually a worry only of protective parents? For one thing, a movie’s ratings influence its financial success. “G, PG, and to a lesser extent PG-13 ratings are associated with better performance metrics, such as domestic box-office tickets sales, video revenues, and return on investment,” Eliashberg and Leenders note.
For another, ratings and warning labels don’t just apply to movies. Many kinds of products are reviewed by either government agencies, as with pharmaceuticals, or industry boards, as with music recordings and video games. By studying ratings of all sorts across countries, marketing managers can understand how boards are likely to rate their products and how they might position them in various markets, the authors say.
Methods of rating movies differ by country. In the United States, ratings are voluntary, though films without them typically fare poorly at the box office. In Australia, they are required. And while U.S. ratings are assigned by a panel of consumers set up by the Motion Picture Association of America, Australia’s are issued by a government agency.
For their analysis, Eliashberg and Leenders gathered data on all internationally released mainstream movies distributed by major studios in the United States between December 29, 1996, and January 3, 1999. They also collected information on these movies’ ratings and performance in Australia, Hong Kong, Italy, Spain, Germany, France and the United Kingdom.
Drawing on earlier research, they then classified countries’ cultures as more “masculine” or “feminine.” “More masculine societies, like Spain and Italy, tend to place greater value on wealth, success, ambition, material things and achievement, whereas more feminine societies, like the Netherlands, tend to place greater emphasis on people, helping others, preserving the environment and quality of life,” they explain.
Movies with restrictive ratings, like R or NC-17 in the United States, saw better returns in more masculine countries, while those with less restrictive ratings, like G or PG, prevailed in less masculine ones, they found. Or put another way, viewers in more masculine countries saw restrictively rated movies as “forbidden fruit” to be eaten, while those in more feminine countries considered them “tainted fruit” to be avoided. Likewise, boards in more masculine countries tended to rate violent movies more leniently.
The two scholars also examined the size and composition of ratings boards. Their analysis revealed that boards that include industry experts tended to be more lenient, while larger ones tended to be more restrictive. From a public policy point of view, this suggests that countries aiming to limit the exposure of children and teens to sex and violence in films might want to create larger boards composed of lay people, they say.
“Our results show that movie ratings play a significant role in determining a movie’s commercial success in the sense that they can exclude consumers (tainted fruit) as well as attract consumers (forbidden fruit),” they write. “The balance between these competing forces is to some extent dependent on the local culture.
“Since the demand is sensitive to these evaluations, the appropriate international marketing strategy may be ‘local adaptation.’ In the motion-picture industry, there are already different versions of DVDs in place with different directors’ cuts as well as movies with different scenes and endings.”
While Eliashberg, the scholar, gets great satisfaction from pondering these sorts of Hollywood problems, Eliashberg the movie buff — he calls Casablanca, High Noon and Chinatown his all-time favorites — finds himself less able these days to marvel over films in the way he once did. Today, when watching a movie in a theater, he says, “I’m analyzing, in real time, the sorts of stuff that my script-reading model focuses on. Plus, I have become very sensitive to the audience composition — age, gender — and their reactions to various scenes.”