User-generated Content: The Medium Impacts the Message

Listen to the podcast:

Wharton's Shiri Melumad discusses her research on how user-generated content changes in tone based on the type of device used to create it.

From Yelp reviews about the corner pub’s burger of the month to comments about how much laundry can be stuffed into a high-efficiency washing machine, user-generated content is ubiquitous. Retailers and aggregator sites have made it easier than ever for customers to post their thoughts on everything from the quality of the service to the cleanliness of the bathrooms. In this avalanche of content, is there a difference in tone depending on what device is used to transmit the review? In her latest research, Wharton marketing professor Shiri Melumad finds that consumers who write out their thoughts on smartphones tend to be more emotional than those who wait until they get home to type on their personal computers. Her findings have implications for both marketers and consumers who rely on user-generated content to inform their decisions.

Melumad recently spoke with Knowledge@Wharton about her paper, “Selectively Emotional: How Smartphone Use Changes User-generated Content,” which was written with co-authors J. Jeffrey Inman, business professor at the University of Pittsburgh, and Michel Tuan Pham, marketing professor at Columbia Business School. (Listen to the podcast at the top of this page.)

An edited transcript of the conversation follows.

Knowledge@Wharton: What was the inspiration for this research?

Shiri Melumad: This research was actually inspired by patterns that I noticed in my own behavior. A few years ago, I started noticing that the way I expressed myself when I was writing certain types of content on my phone — things like work emails or messages to friends — differed quite a bit from how I expressed myself when I wrote the same type of content on my computer. I became really interested in whether any differences systematically arise when consumers generate content on their phone versus a personal computer and, if so, what are the factors that underlie these differences.

Knowledge@Wharton: We’re all creating tons of user-generated content every day, probably without even thinking about it. But for marketers, why is it important to really understand this and how people are doing it?

Melumad: There’s been an explosion of user-generated content in recent years — things like Facebook posts, Yelp reviews and so on. One of the reasons this matters is that customers are increasingly relying on this content as a critical source of information in the marketplace — for example, one study has shown that over 80% of customers rely on some form of user-generated content to inform their purchase decisions.

What this implies is that what’s written or what’s being conveyed in this user-generated content really matters. For example, [Wharton professors] Jonah Berger and Katy Milkman have research showing that content that contains greater emotionality is more likely to be shared and discussed by others online. What this suggests is that marketers need to understand not just what types of content are most likely to be influential to other customers, but also what factors give rise to the creation of such content.

Knowledge@Wharton: You tested this theory using two field studies and three controlled experiments. Why was it important to use both, and what did it allow you to test?

Melumad: In general, in my research I try to complement my experimental findings with field data whenever I can. In this particular paper, the first study I report is a field study of TripAdvisor reviews written by customers either on their smartphones or on their PCs. The reason I wanted to start with this was because I wanted to first establish that these effects actually arise in the real world.

Next, it was important for me to test whether these results hold in a lab setting, in part because it allowed me to experimentally control for a number of alternative explanations that I couldn’t control for in the field data. For example, it’s possible that any differences that we saw arise across devices in the TripAdvisor reviews might have arisen because phone users are simply more likely to write their review at the restaurant, whereas PC users typically wait until they get home to write their review, which may have influenced what they wrote about. There might have also been possible issues of self-selection in the field data — so, it could be the case that the type of consumer who tends to write a review on their phone is somehow substantively different from the type of consumer who tends to write a review on their PC.

This is why I wanted to run an experimental study. I brought participants into the lab and randomly assigned them to write a review either on their phone in one condition, or on their laptop in the other condition. This allowed me to control for any possible differences in temporal proximity to the dining experience, or possible issues of self-selection that might have arisen in the TripAdvisor data.

Knowledge@Wharton: You also use natural language processing as part of this study. Can you explain that?

Melumad: Across most of the studies, we used natural language processing software as well as human ratings of the content — the goal was to find a way to quantify differences in content. Our focus was on whether differences in the degree of emotionality conveyed might arise across different devices. To do this, for example, we used a very well-established text analysis software called LIWC (Linguistic Inquiry and Word Count), which is used in many marketing studies. It has a dictionary of about 90 linguistic categories and, for a given text, it counts the number of words that fall within each of those categories. The output is essentially a count-based measure — for example, it indicates the proportion of words in a given review that belong to its emotionality category. So that’s one way that we went about quantifying differences in emotionality.
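The dictionary-based approach Melumad describes can be illustrated with a minimal sketch. This is not LIWC itself — LIWC's dictionary is proprietary and covers about 90 categories — so the word lists and the `emotionality_scores` helper below are hypothetical stand-ins, shown only to make the count-based proportion measure concrete:

```python
# Illustrative sketch of dictionary-based text analysis in the style of LIWC.
# The emotion word lists are hypothetical stand-ins, not LIWC's dictionary.

POSITIVE_EMOTION = {"love", "amazing", "delicious", "wonderful", "happy"}
NEGATIVE_EMOTION = {"awful", "terrible", "disappointing", "rude", "bad"}

def emotionality_scores(review: str) -> dict:
    """Return the proportion of words falling in each emotion category."""
    words = [w.strip(".,!?;:").lower() for w in review.split()]
    total = len(words) or 1  # guard against empty input
    pos = sum(w in POSITIVE_EMOTION for w in words)
    neg = sum(w in NEGATIVE_EMOTION for w in words)
    return {
        "positive": pos / total,
        "negative": neg / total,
        "emotionality": (pos + neg) / total,
    }

review = "The burger was amazing and the service was wonderful!"
print(emotionality_scores(review))
```

In this toy example, 2 of the 9 words fall in the positive-emotion category, so the review's positive-emotionality score is 2/9. LIWC's output is the same kind of proportion, computed against its much larger validated dictionary.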

“Marketers need to understand not just what types of content are most likely to be influential to other customers, but also what factors give rise to the creation of such content.”

We complemented this with having human raters — who were blind to the originating device as well as our hypotheses — essentially read the text and indicate the extent to which they saw the text as conveying emotionality. We found convergent validity with these two types of measures.

Knowledge@Wharton: It’s not too surprising to find out that people wrote less on their smartphones versus on their computers. But you also found a difference in what they wrote and what they were focused on. Can you talk about that a little bit?

Melumad: I found that, first, because of the smaller keyboard and screen on our phones, we tend to write less when we’re writing, let’s say, online reviews. And because we’re writing less on our phones, we tend to focus on the overall essence or gist of what we’re trying to convey rather than including more specific details.

Importantly, especially in the context of writing a review of, let’s say, a service experience, the gist of what I want to say is going to tend to be based on my emotional evaluation of that experience. In other words, the use of our phones often results in the creation of content that’s more emotional than the use of our PCs. I want to also note that across our studies we consistently found that the greater emotionality of smartphone-generated content was predominantly driven by greater positive emotionality in particular.

And the last thing I’d like to note is that while in most of our studies we look at restaurant reviews, in the final study we were interested in whether these effects generalized to another domain of user-generated content. So, we looked at tweets about different pop culture topics. For example, if a tweet contained a trending hashtag like #WorseWaysToBecomeFamous, this would have been included in our data. We find that, first, tweets written on phones are actually shorter than tweets written on PCs. And again, we find that the content generated on phones is more emotional than that generated on PCs.

Knowledge@Wharton: What does this mean for those of us who are mining user-generated content to help us make decisions?

Melumad: I think this bears important implications for both other customers and for the brands themselves. Specifically, because reviews written on phones tend to be more emotional than those written on PCs, it’s possible that this content may be more diagnostic of customers’ actual feelings about the product or experience. Second, recall that Jonah Berger and Katy Milkman’s research shows that content that’s more emotional is more likely to go viral, or be shared and discussed by others. So, from the firm’s perspective, knowing that reviews have been written on phones — and thus are more likely to be emotional — can help the firm identify which customer-generated content may be the most influential to others.

Note that this information tends to be readily available: Firms are already collecting data on the originating device that their customers are using to browse their websites and, for consumers, certain websites like TripAdvisor indicate whether a review is written on mobile or not. So, it’s really important to take into account the particular device that was used to generate this content.

Knowledge@Wharton: I would think that mobile is probably growing faster than PC, that we’re probably going to see more and more of this content created on a mobile device.

Melumad: That’s right. In response to this, firms are now increasingly pursuing not just mobile-first strategies, but even mobile-only strategies.

Knowledge@Wharton: Are there specific ways that marketers could use this in terms of surfacing certain reviews? I could think of a situation where a lot of marketers will feature a couple of top reviews, and maybe they would want more of these mobile reviews than PC reviews?

Melumad: We need to be careful not to generalize too much, but as I mentioned a moment ago, the greater emotionality of smartphone-generated content is predominantly positive in nature. Given that, I would imagine a company would want to filter for the more positive reviews to begin with — and my results suggest that mobile reviews would tend to be more positively emotional.

Knowledge@Wharton: Do we lose something when user-generated content is focusing more on emotions? You pointed out that this is the type of content people are drawn to, that it’s more likely to go viral. But if it’s a review for a washing machine, we might want to know some of those nuts-and-bolts things that might get left out of an emotional review.

“The use of our phones often results in the creation of content that’s more emotional than the use of our PCs.”

Melumad: I think that’s a really interesting question. In reality, consumers don’t read reviews in a vacuum — meaning, I’ll tend to read a number of reviews before I make a decision.

We ran a preliminary study (that was ultimately not reported in the paper) where we tried to look at the downstream consequences of this greater emotionality. We recruited a separate sample of participants and had them read a random selection of smartphone-generated reviews and PC-generated reviews. In one condition, they knew what device the reviews were written on, but in the other condition they were blind to the device. We found that, regardless of their knowledge of the originating device, readers were more interested in trying the restaurants that were reviewed on smartphones than on PCs. Now, you could imagine this is maybe largely attributable to the greater positive emotionality of smartphone-generated content. We find that it’s statistically mediated by the greater perceived emotionality of smartphone-generated reviews and, given that that’s predominantly positive, perhaps it’s not that surprising. But it was a very interesting result.

Knowledge@Wharton: Was it surprising that positive emotionality was more prevalent than negative? Because we’ve always heard that people are more likely to express themselves if they’ve had a bad experience versus a very positive one.

Melumad: It might seem a little bit counterintuitive at first, but a very consistent finding within the word-of-mouth literature is that word of mouth tends to be predominantly positive. The explanations that have been put forth for this largely argue that this is because of self-presentational concerns: I want to come off as a more positive person, so I’ll tend to share word of mouth that’s more positive. But that’s actually a very well-established finding in the literature, so we weren’t that surprised to find it.

Knowledge@Wharton: What’s next for this research?

Melumad: A number of things. I’m really interested in examining the downstream consequences of these differences. As I mentioned, we’ve run now a couple of preliminary studies where we find that people are more interested in trying restaurants that have been reviewed on phones than on PCs, and that this is statistically driven by the greater emotionality of those reviews.

I’m also really interested in examining how content generation differs across devices in a different context. In a newer project, we find that people not only write more emotionally on their phones but they’re also more willing to self-disclose on the device. Specifically, we’re finding that people seem to be more willing to communicate certain types of personal information when it’s elicited on their phones than on their PCs. This may seem paradoxical or a bit counterintuitive because these days we hear a lot about people being very concerned about data privacy on their phones. Yet what we’re finding is that, in certain contexts, people are more willing to express sensitive information about themselves when they’re writing on their phones.

I’m also really interested in exploring the process of consumption across devices. For example, in one project I use field data to examine differences in consumers’ browsing behaviors on a large news website. What we found — again, this is very preliminary — is that users browsing on their phones are more likely to click on sections like Entertainment, whereas users browsing on their computers are more likely to click on sections like Politics or Science & Tech. These results are consistent with the idea that perhaps we adopt a more emotional mindset when we’re on our phones, whereas on our computers we adopt more of a cognitive mindset.

Citing Knowledge@Wharton

“User-generated Content: The Medium Impacts the Message.” Knowledge@Wharton, The Wharton School, University of Pennsylvania, May 7, 2019. https://knowledge.wharton.upenn.edu/article/user-generated-content-marketing/