The Problems Of Measuring Scholarly Impact (‘Stuff’)

If we’re seeking to adopt some measure to assess scholarly impact, there are serious caveats that need to be addressed before we begin.

Professor Robert Anderson at Pepperdine Law School (place from which I wouldn’t mind a job offer — HINT) asked me a series of questions on Twitter, all of which are important.

If you don’t know Professor Anderson, you should.  His Twitter feed is a running discussion of scholarly impact and of related problems of measurement and hierarchy in academia.  I’ve found his tweets cause me to think.  I blame him for this blog post.

His tweet that got me to thinking was this one:  “My pal @lawprofblawg has got some points here, but I think at some point s/he has got to get a little more concrete with an alternative. Is the status quo working? Why would citation-based metrics be worse? Should law schools evaluate scholarship at all? How should hiring work?”

All good questions.  There was some discussion in that thread about the fact that we ought to have some measure of stuff.  In fact, we already do: when we hire people to join the faculty, when they go up for tenure, and even (to varying degrees of ineffectiveness) when we review them post-tenure.  Those measures are imperfect, filled with biases, and often deployed in an arbitrary fashion.  Sometimes they are hardly metrics at all.

Nonetheless, Professor Anderson is correct: I am in favor of having some metrics.  However, the current metrics aren’t working.  My coauthor and I have explained the biases and entry barriers facing certain potential entrants into legal academia.  Eric Segall and Adam Feldman have explained that there is severe concentration in the legal academy, with only a handful of schools producing the bulk of law professors.  While in the academy, barriers block advancement.  And those barriers are reflected, in my opinion, in current citation and scholarly impact measurements seeking to measure stuff.

But if we’re seeking to adopt some measure, whether it is a global standard that could ultimately replace U.S. News or just a standard at one’s own law school, I think there are serious caveats that need to be addressed before we begin to measure stuff.  When I have seen these issues raised before, I have seen them too quickly dismissed.  So let’s try again.

1.   Measure what and why?  I think the first thing to understand is the purpose behind the metrics.  Why are we seeking to measure?  What are we seeking to measure?  As I mentioned last week, some universities are mistakenly measuring quality via quantity measures.  This is likely due to poorly defined goals, poor planning, or perhaps mere marketing (“We have increased our stuff!”).


I’m skeptical that scholarly impact metrics are capable of serving any of these purposes.  If you have a colleague who is lagging in writing, do you really need metrics to inform them of that?  Are we seeking to measure the quality of the legal education a prospective student can expect to receive at a law school?  You’ll have to take some extra steps to draw that link.  Are we using it to see who deserves to be hired or tenured?

If you’re planning on deploying the metric for these reasons, I have some serious concerns.  Let’s talk more about those.

2.  Does the metric inherently favor the usual suspects? Any metric deployed should be free of bias based upon gender, race, or class.  My outrage about metrics currently deployed is that a blind eye is usually turned to the fact that entry barriers into the academy have long led to some obvious results: The most cited people in the legal academy are white men.  That didn’t happen by accident or the consistently higher scholarly acumen of white men.  So, the current game is rigged, and any new metric must be free of such biases.

3.  Are we actually measuring things we aren’t seeking to measure?  Citation counts that pick up noise based on the author’s alma mater are problematic.  As my coauthor and I mention:  “[When] the author’s alma mater is a strong determinant of whether the article gets published in the top law reviews in the first place, the game becomes transparent. Your best chances of getting published in a top-10 law journal are if you graduated from a top-10 school. Your best chances of getting strong citation counts are if you publish in a top-10 journal. Your best chances of getting into academia are if you come from one of the top-10 schools. Your best chances of being published in a top-10 law journal are if you teach at a top-10 law school. Your best chances….”  That creates a huge problem if what you’re trying to measure is the impact of scholarship and not the impact of privilege.

4.  Are we comparing apples to oranges?  If so, that will prove fruitless.  In my opinion, it is problematic to try to compare professors across legal academic subdisciplines.   I imagine we could weight the relevant markets in such a way that we could potentially come up with a metric that compares the broad fields of constitutional law or corporate law to the somewhat limited fields of underwater basket weaving law.  But I think those weights might be problematic given the network (and networking) effects within those disciplines.  And apart from bragging rights, what is the point of a larger metric?  One answer might be school rankings.  If that’s the case, depending on how poorly the metrics are deployed, curriculum might be affected as schools opt to not have professors in lower ranked fields.  That would create more problems, even for professors in the dominant networks.


5.  If the purpose of the metric is to measure quality, will use of the metric affect quality?  I think it is problematic if the metric is a standard across law schools.  To me, that suggests that competition on quality will be limited to whatever the metric is.  Of course, this is my objection to U.S. News and the quest for rankings.

Don’t think that’s a problem?  Let’s try a thought experiment. You wrote an article about the mental health issues that lawyers face.  Of the following, which would you rather have happen to you?  A) You receive a letter from a lawyer stating, “Your article saved my life.”  Or B) Your article gets cited by a prominent scholar who is cited and read all the time.  This scholar cites your paper in a series of articles, which increases your citation count.  Did A or B have more value?

Yes, I know the example is contrived.  But what I’m suggesting is that an article’s value may not be fully (or even partially) captured by traditional (and even potentially innovative) measures of scholarly impact.  I suppose usage data might pick up some of this, but I’m not convinced that will be the case.

6.  Any scholarly impact metric must be free from gaming.  As an example, if my faculty of 430 full-time members gets together and agrees to cite each other’s work whenever it is even remotely related to an article, that is a game that cannot be replicated by a school with 20 full-time faculty.  I can imagine law schools pressuring their students to add citations to articles by existing faculty, and perhaps even eschewing citations to competitor schools.  If there is a way to game the system, the system will be gamed.

7.  Are we measuring location, location, location?  A top-10 law review placement is not a good proxy for article quality, but it does have effects on citation measurement.  How do we account for the perceived quality of the journal?  And there is literature suggesting that even an article’s placement within an issue matters to citation counts (being the lead article is best).  In a game in which my university wants me to publish in law reviews because that is how scholarly impact is measured, I will probably opt not to publish in other media (say, peer-reviewed journals, even if I could reach a broader audience there).  My blogging should probably stop, too, even though the number of people reading this will far exceed the number who have read my articles.  Okay, that’s depressing.

8.   Will we start writing to the beat of the metric?  I feel the same way about this as I feel about schools that contend with standardized tests, only to ignore education in order to teach to the test.  If the point of scholarship is to get cited, then perhaps we ought to rethink this whole scholarship endeavor.  In a previous blog post, I contended that the purpose of scholarship is to make the world a better place.

A friend pointed out that I didn’t define what that means.  True.  It is something that ought to be discussed before we try to measure it.

Huh.  I guess that’s my larger point.

TempDean, my frequent coauthor on Above the Law, pointed out that “the only metric that matters is: Will anyone care 20 years after we’re gone? And no one wins.”  That’s a pretty harsh indictment.  But it is also consistent with my concern about how law professors search for meaning.


LawProfBlawg is an anonymous professor at a top 100 law school. You can see more of his musings here. He is way funnier on social media, he claims. Please follow him on Twitter (@lawprofblawg) or Facebook. Email him at lawprofblawg@gmail.com.