Human resource professionals have begun to use sophisticated data analysis for all sorts of people-related issues ranging from recruitment and performance evaluation to promotion and compensation. People analytics, as this approach is called, is making waves because it is said to eliminate biases that exist in human judgment. Cade Massey, practice professor of operations and information management at Wharton, works at the intersection of psychology and economics and has focused on this approach. His expertise is judgment under uncertainty, looking at optimism, overconfidence and learning. Massey’s research is based on both laboratory experiments and archival studies of real-world behavior, such as the draft picks of professional football teams and the investment decisions of employees holding stock options.
Massey is one of the key speakers at a conference on People Analytics being hosted by The Wharton School on March 28. In this discussion with Knowledge at Wharton, Massey talks about the potential of this data-driven approach, its limitations and the lessons we can learn from the sports arena, which began applying the discipline long before the corporate world did.
An edited transcript of the conversation follows:
Knowledge at Wharton: People analytics has been described as a data-driven approach to managing people at work. What does that mean and how does it differ from traditional ways of managing people?
Cade Massey: Traditionally, people have made decisions intuitively around whom to hire, to promote, to compensate the best…. They may use some numbers, but not in a systematic way. People analytics is an attempt to bring more systematic processes to these decisions. They are some of the most important decisions that an organization makes. We have seen good use of data in other fields — finance and marketing. This is the slow evolution of bringing data to new fields.
Knowledge at Wharton: How exactly does that work? Does it really eliminate biases?
Massey: Well, it helps, and this is one of my motivations. I study decision making, and it is very much about the biases people have in their intuitions. These are hard to root out. Bringing some data to the process and [enabling] more systematic decision making, more systematic analysis, is one way of doing it. “Eliminate” is a big word. I am sure that there are some situations where we can take a bias all the way down, but mostly we are just trying to improve.
Knowledge at Wharton: Let’s start by looking at recruitment, which is the first step in a company’s engagement with an employee. Now say at the University of Pennsylvania, if we wanted to fill a job, we would post it on our website. We would look at all the resumes that came in, see who the most qualified people were, call them in for an interview, and then try to find the person who was the best fit. Using people analytics, how does this process change?
“Start modeling the process and you start appreciating the difficulty…. But the idea is that we can improve, we can do better, we can be more accurate by adding some analysis to the intuition of the people who are making the decisions.”
Massey: If we were to come in and try to bring some of these tools to that process, the first thing we would probably do would be to look at as much historical data as possible to understand what attributes in these candidates predict long-term performance. Instead of interviewing them, we would look at their characteristics from their application and ask: What is the relationship between these observables and long-term performance?
It does not mean that we will give that model 100% discretion on who gets hired. But we use that model’s output to inform our decisions. So we will probably still do the interview. We will probably still have a group discussion. But to have some analytical rigor informing that discussion would be better than most processes that do not have it.
The other thing that we might do is to model the [hiring] decisions themselves. So rather than modeling long-term performance of the candidate, we say, well, what are we doing right now? Who are we deciding to bring in? And you can go in and find out, whether you know it or not, if you have been implicitly putting 20% weight on GPA and 50% weight on the prestige of the company they work for and 30% on [other factors].
You can just ask: What have we been doing? Even though we do not follow some rules, we are implicitly going to end up relying on some rules. That is often enlightening. An organization might not know that its process put that much weight [on a particular factor]. Maybe that is OK. But maybe [after seeing the data] they decide they would rather put weight on something else.
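This kind of “model your own decisions” exercise is straightforward to prototype. Below is a minimal sketch in Python, fitting a logistic regression to past screening decisions to surface the implicit weights; the attribute names, the data and the weights are hypothetical placeholders, not drawn from any real hiring process.

```python
# A minimal sketch of modeling past screening decisions to surface the
# implicit weights a process has been placing on candidate attributes.
# All names and data below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500  # past applicants (synthetic stand-in data)

# Hypothetical observables pulled from the application.
gpa = rng.normal(3.2, 0.4, n)
prestige = rng.normal(0.0, 1.0, n)    # prestige score of prior employer
experience = rng.normal(5.0, 2.0, n)  # years of experience

X = np.column_stack([gpa, prestige, experience])
# Past decisions (True = advanced to interview), however they were made.
y = (0.2 * gpa + 0.5 * prestige + 0.3 * experience
     + rng.normal(0.0, 1.0, n)) > 1.5

# Standardize so the coefficients are comparable across attributes.
model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)

for name, coef in zip(["GPA", "employer prestige", "experience"],
                      model.coef_[0]):
    print(f"implicit weight on {name}: {coef:+.2f}")
```

On standardized attributes, the relative sizes of the coefficients are a rough answer to the question Massey poses: what weights have we implicitly been putting on GPA, prestige and the rest?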
Knowledge at Wharton: You mentioned being able to predict performance. How accurate are these predictions typically?
Massey: Well, they are going to vary dramatically. They are never going to be perfect. These are noisy processes. One of the things [you learn] — and I think this is a virtue of people analytics — is how much chance there is, how much noise there is, how imperfect the process is. Even the best model, with the best data, is imperfect. That is actually an important lesson.
We do not typically [realize] that lesson in the decisions we make in the interview and hiring process. Usually we think: I can predict that. You remember the ones [whose performance] you predicted well and you conveniently forget the ones that you did not predict so well. You never develop that humility about the process. Start modeling the process and you start appreciating the difficulty. It’s imperfect prediction. But the idea is that we can improve, we can do better, we can be more accurate by adding some analysis to the intuition of the people who are making the decisions.
Knowledge at Wharton: I heard that companies like Google and Xerox have been using this approach to recruitment. How good are the results?
Massey: It is not all about modeling performance. It can be any kind of analytics applied to these processes. For example, one thing that Google did a few years ago was to ask: How good are interviews for predicting performance on the job? They discovered that the literature says they are not very good. But people want to prove it to themselves, so Google said, we will study it ourselves. They ran the numbers and discovered that interviews do not do a whole lot. They serve other purposes, so maybe you do not want to throw them away. But they were spending hours of their managers’ time interviewing candidates — eight interviews, nine interviews, 10 interviews. And they discovered [the interviews] do not do much [to predict future performance]. [Google said] let’s just cut that back. Let’s cut it down to the bare minimum. Let’s have three or four interviews. That is just being more analytical in your decision-making process, using analysis to improve what you are doing organizationally.
“[Google asked:] How good are interviews for predicting performance on the job? They ran the numbers and discovered interviews do not do a whole lot.”
Knowledge at Wharton: That sounds very counterintuitive. I remember National Public Radio did a piece about Xerox and how they were trying to recruit people for call centers. One of the counterintuitive things they discovered was that, if somebody had a lot of experience working for different call centers, it was not necessarily a good thing. It might just be that they had a high burnout rate. It was, in fact, a predictor of potentially bad performance. Have there been other such counterintuitive findings from using people analytics?
Massey: I got off the phone with an NFL team two hours ago. This team is putting a great deal of effort into the NFL draft. All teams do to some extent. This team happens to be one of the most sophisticated in using analytics. This conversation was about their discovering that some things matter for a particular position that nobody would have expected. They are doing new and better analysis, and they are discovering that one of the most important predictors is one that nobody considers.
This is another great virtue of let’s-bring-some-data-to-the-conversation. We are not saying turn the decision over to the data. We are saying bring some data to the conversation, because sometimes you discover these intuitions — or even conventional wisdom — might not only be wrong, but the opposite of what is actually going on.
Knowledge at Wharton: So people analytics can also be used in fields like sports. If a football team like the Philadelphia Eagles wanted to use analytics, how might they do it?
Massey: They are doing it. In many ways people decisions are easier in sports because we have so many observables. We see so many inputs to a decision — so many quantifiable inputs — and then we see so many outputs. We actually see what these guys do on the field or on the court.
In fact, if you decide not to take a player, you often see how that player turns out in a way that you rarely do in non-sports organizations. If you hire one lawyer, you do not often track the career of the lawyer you did not hire. In sports you can do all those things. So we can evaluate a team’s decisions much more carefully and have many more data points to work with.
As a result, for the teams that are interested, it is a great opportunity. And some teams have really taken advantage of this, the Eagles being one of them. The Eagles are using data at every point of the process. They hired a new coach last year who is very data oriented. We have a lot to learn from those guys because they are investing heavily in that.
Knowledge at Wharton: How can people analytics be used in performance evaluation? This is typically one of the things that a lot of managers dread because the idea of giving negative feedback to employees is often quite stressful.
Massey: One of the challenges with performance evaluations is the temptation to quantify everything, because it is easier if it is not your opinion. You just check boxes. There are some things [that are] observable or measurable, and [you can] put all the blame on the number. But if you evaluate that way, all the weight gets put on those things, and you might care about other things that are not objectively quantifiable. So performance evaluations necessarily require a mix of subjective evaluation and some kind of objective measurement.
We are never going to take away the subjective piece. But the idea is that we need to be as systematic and consistent as possible with the quantified piece. So this is a very general approach. It could be any attempt to analyze which measures are most reliable over time.
“We are not saying turn the decision over to the data. We are saying bring some data to the conversation.”
A classic example is performance in fund management in the investment world. Bonuses are a big part of compensation there, and bonuses are typically related to how your fund performed — not perfectly, but that is an important input. There have been studies showing that in some places, some of the time — I am not going to say all places, all times — a fund manager’s performance in one year is unconnected to his performance the next year.
The idea is, if that is true, then there is a lot of chance in this process and these differences are not functions of skill. If they are not functions of skill, then maybe we should not be rewarding them heavily each year [their fund performs well]. That is a hard case to make and it is hard for people to appreciate if you just talk about it. But if you bring data and you run the numbers and you do a thorough study, you might be able to convince them. You can actually figure out how much of this is … skill-based and how much of it is chance.
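As a sketch of the kind of study Massey describes, one could check whether managers’ performance persists from one year to the next; a rank correlation near zero suggests the year-to-year differences are mostly chance. The data below are synthetic placeholders, and a real study would use actual, risk-adjusted fund returns.

```python
# A minimal persistence check: if skill drives results, this year's
# performance should predict next year's. Data here are synthetic
# placeholders standing in for risk-adjusted annual fund returns.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_managers = 200

# Assume each manager has a small persistent edge plus large yearly noise.
skill = rng.normal(0.0, 0.01, n_managers)
year1 = skill + rng.normal(0.0, 0.05, n_managers)
year2 = skill + rng.normal(0.0, 0.05, n_managers)

# Rank correlation across consecutive years estimates persistence.
rho, p = spearmanr(year1, year2)
print(f"year-over-year rank correlation: {rho:.2f} (p = {p:.3f})")
# A correlation near zero would argue against rewarding single-year
# results heavily, since they would be mostly chance.
```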
Knowledge at Wharton: Can you use people analytics to evaluate leadership potential?
Massey: That is interesting. I know less about this because fewer organizations are doing it. But it has to be the case that there is potential there and I am sure there are organizations that are already doing that. It is hard. I had a conversation one time with a person who had come up through the people analytics [side of an] organization and then moved to talent management. In the new role, part of her job was to decide on promotion to the executive level. This was the key promotion level in the organization.
I had worked with her for a couple of years in the people analytics side of things. She has a Ph.D. She is very data oriented. So when I saw her for the first time in this talent management role, I assumed that there would be a model and some data and she would be using analytics. But she’s like, “Oh, no, we can’t do that. That’s too important a decision.” I feel I’m in trouble if she feels that way after all these years. But I am sure that over time it is going to play a role there. It is a harder thing, clearly, but that kind of makes it fun. What is it that we could observe about employees early in their career that would tell us [whether] they have exceptional promise? It is an interesting, challenging and important question.
Knowledge at Wharton: Very closely related to leadership is the whole idea of teamwork because so many companies require employees to work collaboratively. And, increasingly, these teams are not based in the same location. They may be in different cities, maybe even different countries. Can people analytics tell us anything about how teamwork can become more effective?
Massey: I would like to think so. It is hard to imagine that you couldn’t. The way to think about it is that it is essentially just analysis. It is just quantitative analysis. The leap is that we do not typically think about doing it for this very interpersonal or traditionally soft issue. So it is simply a question of how we can measure something about the group that would allow us to say something. This is where sports are informative.
“What could we observe about employees early in their career that would tell us [whether] they have exceptional promise? It is an interesting, challenging and important question.”
How we translate it from sports is a challenge and it is going to take time. But, for example, in hockey, they have had to learn how to evaluate players even though you never see a player on the ice by himself. You always see him on a team of six people…. Basketball is the same thing; you have to figure out how to evaluate a player on the court with [four] of his teammates. He has five opponents as well.
So they have had to come up with ways of evaluating team performance that reflect something about the individual. How do we pull the individual out of that group? Teams are getting more and more sophisticated at how to do that. Maybe there are things that we can borrow from the NBA that would help us better understand group performance on an engagement team at McKinsey, for example.
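One concrete version of this from basketball analytics is adjusted plus-minus: regress each stint’s score margin on indicators for which players are on the floor, so a player’s coefficient estimates his contribution net of teammates and opponents. Here is a toy sketch with synthetic stints and hypothetical player IDs; it illustrates the technique, not any team’s actual model.

```python
# A toy sketch of the adjusted plus-minus idea: regress each stint's
# point margin on indicators for who was on the floor, so a player's
# coefficient estimates his contribution net of teammates and opponents.
# Stints and player IDs here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_players, n_stints = 30, 2000
true_impact = rng.normal(0.0, 2.0, n_players)  # latent per-stint impact

X = np.zeros((n_stints, n_players))
y = np.zeros(n_stints)
for i in range(n_stints):
    home = rng.choice(n_players, 5, replace=False)  # 5 home players
    away = rng.choice(np.setdiff1d(np.arange(n_players), home), 5,
                      replace=False)                # 5 away players
    X[i, home] = 1.0    # +1 for home players on the floor
    X[i, away] = -1.0   # -1 for away players on the floor
    y[i] = (true_impact[home].sum() - true_impact[away].sum()
            + rng.normal(0.0, 6.0))  # noisy observed margin

# Ridge regularization stabilizes estimates for players who rarely
# appear apart from the same teammates (collinear lineups).
apm = Ridge(alpha=50.0).fit(X, y).coef_
print("estimated top-3 players by impact:", np.argsort(apm)[-3:])
```

Ridge regularization is the standard remedy for lineups that almost always appear together, which would otherwise make individual coefficients hard to identify.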
Knowledge at Wharton: What do you believe to be some of the greatest concerns about people analytics? For example, how do companies draw boundaries around privacy, and the use of data and metadata? Is there any regulation around these kinds of issues?
Massey: I do not know the answer on the regulation side. It is certainly a sensitive issue. And I have seen changes in companies’ willingness to collect this data in the last three-to-four years. Early on, in a conversation, I was sitting next to a sociologist whose specialty was studying 12th-century churches, someone who had to be very creative about collecting data. I am much more of a psychologist, and if I think about data — if I am going to get really creative data — it is going to be something like cortisol [levels]. Companies do not let us collect cortisol [levels] from their employees. He came up with this idea of how we can monitor passively where employees are. He had all these interesting ideas that might actually say something about the organizational culture. I was very impressed.
That was from a very different disciplinary perspective. At the time, even that seemed scary. Now you read about little IDs on badges that capture where everybody is all the time, or meters on chairs so they will know when people are actually using a space. That is where the world is going. And it is scary to some people. It is obviously rife with ethical issues and that is a risk.
Knowledge at Wharton: One last question. If you were to look at the future, what do you think people analytics will be able to do, say, five years from now that it cannot do today?
Massey: That’s interesting. It has to come from the big data side of things and the passive data side of things. Again, I look at the frontier of this, which is in the sports world. To watch what they have done in the last five years is to see, I think, where we are going to go in the next 20. So the NBA, for example, now tracks every player and the ball multiple times a second throughout the game. Given enough computing power — and enough Ph.D.s — some people have been able to go in and do some really good performance evaluation with that data. It took a lot of resources and some good thinking, but they have come up with incredibly insightful evaluations. I think we are just going to keep on discovering more and more examples as those kinds of technologies filter down to non-sports teams.