As some of you may have heard, U.S. News & World Report, which used to be a magazine found in dentists’ offices, released its annual law school rankings last week. The event sparked even more than the usual amount of angst and hysteria among law deans and students. Then again, this is already the ninth post on ATL concerning this set of rankings, so maybe we’re not helping. Some deans’ heads have already rolled, and angry student petitions are calling for more blood. (Do these reactions among law students run only one way, though? The anger sparked by a drop in the rankings does not necessarily mean an inverse spike in happiness when a school climbs, as this great pairing of gifs from someone at Chicago Law illustrates.)
Anyway, much of the heightened attention is due to the revisions U.S. News made to its rankings methodology, which now applies different weights to different employment outcomes, giving full weight only to full-time jobs where “bar passage is required or a J.D. gives them an advantage.” Whatever that last bit means. And U.S. News won’t tell us exactly how “part-time” and other categories of employment outcomes factor in. But it is at least an acknowledgement on the magazine’s part of the perception that, as Staci said yesterday, “all anyone cares about are employment statistics.” (We’ll get back to whether that’s strictly true.) Then again, if employment outcomes make up only 14% of your ranking formula for a professional school, you’re doing it wrong. What would a better, more relevant rankings methodology even look like?
Stephen Harper observed last week, “[R]ankings facilitate laziness. The illusory comfort of an unambiguous numerical solution is easier than engaging in critical thought and exercising independent judgment.”
But as the history books and Homer Simpson have taught us, whenever “laziness” and “comfort” collide with “critical thought” and “independent judgment,” the latter two end up as roadkill. Every time. We can lament this fact of life all we want, but people want guidance. Preferably in the form of easy-to-understand lists.
The various inputs that make up the U.S. News formula are well known to all of us: LSAT, GPA, employment data, reputational surveys, “resources” (i.e., spending), etc. We hardly need any more evidence of the soaring unpopularity of the U.S. News approach and the resulting product. But what factors would go into a more useful, more rational rankings methodology?
The compulsion to rank law schools will never leave us. And pace Staci above, a rankings scheme based solely on a single, flawed dataset can’t be the answer. We would love to hear what factors you believe ought to be included (and ignored) in a hypothetical new, improved approach to ranking law schools. In addition to the components of the U.S. News methodology, other possible considerations range from faculty scholarship to federal clerkship placement. There are myriad possibilities. Please take a two-minute survey here and let us know what you think.
A final note: one data point that it would be absolutely great to have when rating/ranking/choosing among law schools is the relative federal student loan default rate of a school’s graduates. This would be a stark, telling indicator of ROI and of how well alumni were prepared to face the rigors of the job market. Alas, when we contacted the Department of Education looking for this information, we were told that default data is segmented only by something called an “Office of Postsecondary Education Identification Number” and nothing else, so the stats for individual institutions within a university system do not exist, at least as far as the DOE is concerned. In other words, for the purposes of tracking the default rates at, for example, Harvard, the DOE lumps all the alumni of the business, medical, law, divinity, and other grad schools into the same hopper, with no way to untangle the data. You’re doing it wrong, government.