In a recent interview with Lee Pacchia of Bloomberg News, U.S. News & World Report’s director of data research Robert Morse explained this year’s only revision to his law school rankings methodology. Morse gave different weights to various employment outcomes for class of 2011 graduates. But he didn’t disclose precisely what those different weights were.
Morse said that such transparency worried him. Full-time, long-term jobs requiring a law degree got 100 percent credit. But he wouldn’t reveal the weights he gave other employment categories (part-time, short-term, non-J.D.-required) because he didn’t want deans “gaming the reporting of their results.” It was an interesting choice of words.
In some ways, all of the attention to the changes in this year’s rankings methodology is remarkable. Certainly, a school’s employment success for graduates is important. But the nine-month data point for which the ABA now requires more detailed information accounts for only 14 percent of a school’s total U.S. News ranking score. To put that in context, consider some of the more consequential rankings criteria.
Fifteen percent of every school’s U.S. News score is based on a non-scientific survey of practicing lawyers and judges. This year’s survey response rate was just nine percent.
Likewise, the “peer assessment” survey sent to four faculty members at every accredited law school — the dean, the dean of academic affairs, the chair of faculty appointments, and the most recently tenured faculty member — accounts for 25 percent of a school’s ranking score. It asks those four individuals to rate every ABA-accredited law school on a scale of 1 to 5, without requiring that any respondent know anything about the schools he or she assesses.
Taken together, the two so-called “quality assessment” surveys — 40 percent of every school’s ranking — amount to a self-reinforcing contest for brand recognition. As measures of substantive educational value, well, you decide.
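The arithmetic is easy to see in a minimal sketch. The weights below are the ones cited in this article (peer assessment 25 percent, lawyer/judge survey 15 percent, nine-month employment 14 percent, median LSAT 12.5 percent, faculty resources 15 percent); the remaining 18.5 percent is lumped together as “other criteria,” and the component scores for the school are hypothetical, purely for illustration:

```python
# Component weights as cited in this article (fractions of the total score).
# "other_criteria" lumps together the remaining 18.5 percent for illustration.
WEIGHTS = {
    "peer_assessment": 0.25,            # faculty survey
    "lawyer_judge_assessment": 0.15,    # practitioner/judge survey
    "employment_at_nine_months": 0.14,
    "median_lsat": 0.125,
    "faculty_resources": 0.15,
    "other_criteria": 0.185,
}

def overall_score(components):
    """Weighted sum of normalized (0-100) component scores."""
    return sum(WEIGHTS[name] * score for name, score in components.items())

# Hypothetical school: strong survey numbers, weak employment outcomes.
school = {
    "peer_assessment": 90,
    "lawyer_judge_assessment": 85,
    "employment_at_nine_months": 40,
    "median_lsat": 70,
    "faculty_resources": 75,
    "other_criteria": 60,
}

print(round(overall_score(school), 2))
```

In this made-up example, the two surveys alone contribute more points than the employment, LSAT, and faculty-resources components combined, which is the sense in which brand recognition dominates the result.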
Game of moans
But if, as Morse suggests, his concern is “gaming the reporting,” he must be worried that some deans will either 1) self-report inaccurate data, or 2) change their behavior in ways designed to raise their school’s ranking. He’s a bit late to both parties.
Scandals engulfed prominent law schools that submitted false LSAT and GPA statistics for their entering classes. But how many others haven’t been caught cheating? No one knows. As for permissible behavior that accomplishes similar objectives, examples abound.
For years, deans seeking to boost the median-LSAT component — 12.5 percent of a school’s ranking — have been “buying” higher LSAT scores for J.D. entrants through “merit” scholarships. Need-based financial aid has suffered as a result. Ironically, those merit scholarships often disappear after the first year of law school.
Likewise, the faculty resources component is 15 percent of every school’s ranking. But it rewards expenditures — and the skyrocketing tuition that funds them — without regard to whether they benefit students’ educational experience.
Whom to blame
Morse establishes the criteria and methodology that incentivize behavior producing these and many other perverse outcomes. But he doesn’t think that any of the current problems confronting the profession are his fault.
“U.S. News isn’t the ABA,” he told Pacchia. “U.S. News doesn’t regulate the reporting requirements…[W]e’re not responsible for the cost of law school, the state of legal employment, the impact that [the] recession has had on hiring, or the fact that 10 or 20 new law schools have opened over the last couple decades. We’re not responsible for the imbalance of jobs to graduates. No, I think we’re not responsible. I think we’ve helped prospective students understand what they are getting into [better] than they were previously.”
Of course, the problem isn’t just the flawed rankings methodology itself. Also culpable are the decision-makers who regard a single overall ranking as meaningful — students, deans, university administrators, and trustees. Without their blind deference to a superficially appealing metric, the U.S. News rankings would disappear — just as the U.S. News & World Report print news magazine did years ago.
Rankings are pervasive throughout society and may be a permanent feature of the legal profession. But it’s worth remembering that they’re relatively new. Before U.S. News published its first list of just the top 20 law schools in 1987, prospective students and law schools somehow found each other.
Today, rankings facilitate laziness. The illusory comfort of an unambiguous numerical solution is easier than engaging in critical thought and exercising independent judgment. Forgotten along the way is the computer science maxim “garbage in, garbage out.”