An Unbiased Way to Rank Schools: Some Changes in the Lineup
Published: January 12, 2005 in Knowledge@Wharton
Ah, December, a time of holiday celebrations, shopping mall traffic jams, festive family gatherings - and for many high school seniors and their parents, some nail-biting days waiting to get a particularly important piece of news.
It's early decision time.
Early decision (ED), an increasingly popular and controversial program offered by most private and many public colleges and universities in the U.S., allows applicants to apply to a first-choice college and get an early decision, often by mid-December, if they promise to attend (provided they are accepted). But in recent years, ED and other strategic admissions programs have come under fire, with critics contending that they are an easy way for schools to manipulate their admission and matriculation rates: The more students admitted under an early decision program, the higher a school's matriculation rate.
What's behind all the stage-managing? It comes down to the rankings, experts say. Colleges and universities, desperate to increase their standing in the closely watched annual college ranking articles and guides, have quietly created programs and systems to boost the appearance of selectivity and desirability. Critics of college ranking guides point to the practice of encouraging applications from students who have little chance of being accepted, a custom that lowers a school's admissions percentage and thus makes the school appear more selective. They also cite schools nudging up matriculation rates by rejecting applicants who seem to be treating the college as a "safety" school and are unlikely to attend.
In a recent study, titled "A Revealed Preference Ranking of U.S. Colleges and Universities," Wharton finance professor Andrew Metrick and his co-authors create a new, market-driven college ranking system they say would help end this maneuvering. Metrick and Harvard's Christopher Avery and Caroline Hoxby, as well as Mark Glickman from Boston University, built a statistical model they compare to ones used to rank professional chess players. It is a system that rates more than 100 colleges and universities based entirely on where America's best and brightest students actually decide to go.
"A lot of admissions people will tell you off the record that in an effort to move themselves up a couple of notches in the rankings, they resort to a number of things to improve their admission rates and matriculation rates that are really anti-competitive," Metrick says. "Some schools are admitting more than 40% of their freshman classes via early decision, which began as a wonderful program for students and schools but has become something entirely different today because of the pressure colleges feel to improve their standings in the rankings. I think that's a little out of hand. One way to relieve this pressure is to have some kind of measure that would allow you to talk about how selective and preferred a school is without inducing this counterproductive strategic behavior."
The study, submitted in October to the National Bureau of Economic Research, creates a model that relies solely on the real-world decisions of admitted students. "Our system extends models used for ranking players in tournaments, such as chess or tennis," Metrick says. "When a student decides to enroll at one college among those that have admitted him, he effectively decides which college won in head-to-head competition. This model efficiently combines the information contained in thousands of these wins and losses, and produces a ranking that would be very difficult for a college to manipulate."
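The head-to-head idea Metrick describes can be illustrated with the classic Bradley-Terry model for pairwise-comparison data, a standard building block of the tournament-ranking methods the study extends. The sketch below is only an illustration of that general technique, not the authors' actual model; the college names, match counts, and function name are invented for the example.

```python
from collections import defaultdict

def bradley_terry(choices, iterations=200):
    """Fit Bradley-Terry 'strength' scores from head-to-head wins.

    choices: list of (winner, losers) pairs. Each pair records one
    student who, admitted to {winner} plus `losers`, enrolled at
    `winner` -- i.e., the winner beat each loser head-to-head.
    """
    wins = defaultdict(int)     # total wins per college
    matches = defaultdict(int)  # match counts per unordered pair
    colleges = set()
    for winner, losers in choices:
        colleges.add(winner)
        for loser in losers:
            colleges.add(loser)
            wins[winner] += 1
            matches[tuple(sorted((winner, loser)))] += 1

    # Standard minorization-maximization updates:
    # p_i <- W_i / sum_j n_ij / (p_i + p_j)
    strength = {c: 1.0 for c in colleges}
    for _ in range(iterations):
        new = {}
        for c in colleges:
            denom = 0.0
            for other in colleges:
                if other == c:
                    continue
                n = matches.get(tuple(sorted((c, other))), 0)
                if n:
                    denom += n / (strength[c] + strength[other])
            new[c] = wins[c] / denom if denom else strength[c]
        # Normalize so scores stay on a comparable scale.
        total = sum(new.values())
        strength = {c: v * len(new) / total for c, v in new.items()}
    return strength

# Toy data: College A wins most of its head-to-head "matches."
choices = [("A", ["B"]), ("A", ["B", "C"]), ("A", ["C"]),
           ("B", ["C"]), ("C", ["B"])]
scores = bradley_terry(choices)
ranking = sorted(scores, key=scores.get, reverse=True)
```

With this toy data, College A ends up with the highest strength score, mirroring how a college that repeatedly "wins" admitted students against rivals rises in a revealed-preference ranking, regardless of how many applicants it rejects.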
But can colleges be judged based on who "wins" the competition for students? Metrick and his co-authors contend that they can. "First, students believe and act as though their peers matter," the researchers say in the study. "This may be because peer quality affects the level of teaching that is offered. Alternatively, students may learn directly from their peers. It is reasonable for students to care about whether they are surrounded by peers with high college aptitude ... Second, students - especially the high achieving students on whom we focus - are not ignorant about college quality. They gather information from publications, older siblings, friends who are attending college, college counselors and their own visits to colleges."
A ranking based on student preference, Metrick and his authors say, is an efficient way to amass observations about quality from thousands of students. They cite parallels in the food and hospitality industries, where consumers judge restaurant and hotel quality based partly on their own experiences, but also seek out others' opinions. "This is why there is a demand for guides like Zagat's, which aggregate people's observations about hotels and restaurants," they write.
To find student candidates for the study, the researchers worked with guidance counselors at 510 high schools across the United States. Top-ranking seniors from those schools were surveyed and tracked, completing two questionnaires over the course of the academic year, for a response rate of 65%, or 3,240 students. Students were asked about their backgrounds and college applications: each student listed up to 10 colleges where he or she had applied, along with his or her test scores, race, ultimate admission outcomes, financial aid and scholarship offers, and final matriculation decision. The sample contained students from 43 states and the District of Columbia.
What schools came out on top? Not surprisingly, Harvard and Yale came in first and second, with Stanford, Caltech, MIT and Princeton following. And while the top schools look much the same as the top colleges and universities in the U.S. News & World Report rankings, their order changes, sometimes significantly. Duke, ranked fifth by U.S. News, drops to 19th, while Princeton, which U.S. News tied with Harvard for the number one spot, dips to sixth. Other schools seem to benefit. Brown University, which offers students flexibility in their curriculum, as well as Georgetown and Notre Dame, which have strong Catholic followings, fared better than on the U.S. News list.
Metrick has no interest in selling the student preference rating system he and his colleagues created or joining in the rankings frenzy by producing an annual list. His hope for the model, he says, is that the publishers of rankings guides will begin to consider using such unbiased, scientific measures when creating their guides. "If it were completely up to us, we would have the College Board or U.S. News do a survey every year to gather this data and report it instead of the admission rate or the matriculation rate as a measure of desirability - because people are interested and they want to know," says Metrick. "This is just a better measure."
Further, the cost of gathering such data would be "a trivial share" of the revenues associated with college guides, and at least some of the data are already compiled by organizations like The College Board and the ACT (an SAT alternative) "so that gathering a highly representative sample should be very feasible," the authors write. If a student preference ranking based on "our procedure were used in place of manipulable indicators like crude admissions rate and crude matriculation rate, much of the pressure on colleges to manipulate admissions would be relieved."
"We aren't naïve enough to think that U.S. News is going to see this and say, 'We should replace our system with this one,'" says Metrick, acknowledging that college guide publishers were somewhat blasé about the study. "But we are academics and we want to get useful ideas out there. We think that this is important information and that students have a real interest in knowing what colleges and universities are most attractive to their peers."