Weaponizing Student Evaluations

Biases in student evaluations are used to reinforce the biases of departments seeking to deny tenure to professors who aren’t white males.   

I remember a poignant conversation with an associate dean.  The associate dean called me into his office because one of my courses had lower than average student evaluations.  More specifically, one evaluation had something along the lines of “I HATE YOU!  I HATE YOUR BOOK!  I HATE YOUR FACE!   I JUST HATE EVERYTHING ABOUT YOU!”  You know, the important feedback we are supposed to get from evaluations.

The associate dean then proceeded to talk to me as if I were a terrible teacher.  I called him out: “Can we stop talking like I’m a terrible teacher?”  My evaluations were, apart from this outlier, completely fine. He replied, “We’d be having this discussion even if you were a great teacher!”  Yes, he said that.  I then looked up his student evaluations.  Let’s just say he wasn’t “a great teacher,” either.

I now instruct my students not to cause drama on student evaluations. I’m fine with comments about pedagogy, about the curriculum, and statements that will help me improve as a teacher.  But that isn’t usually what I get.  I get numbers and either “You rock!” or “I hate you!”

But I’m tenured.  If the associate dean doesn’t like how I teach, the punishment is to put me in smaller classes with fewer students, and thus fewer exams to grade.  That’s right: That’s the punishment.  But it’s only a punishment for those of us who enjoy and care about what we do, and for the untenured, for whom it might count as a death knell.

This is where the sexism and racism come into play.  I won’t rehash the literature on why student evaluations demonstrate sexism and racism.  I don’t have time or space enough to recount all of the studies.  There are a lot of them.  Suffice it to say that the empirical evidence overwhelmingly shows that if you are a woman, a person of color, or, worst of all for evaluation purposes, a woman of color, you will get dinged in student evaluations merely because of who you are.

But I do have space to talk about Ratemyprofessors.com.  As Professor Kristina Mitchell explains about her study, “We looked at the content of the comments in both the formal in-class student evaluations for his courses as compared to mine as well as the informal comments we received on the popular website Rate My Professors. We found that a male professor was more likely to receive comments about his qualification and competence, and that refer to him as ‘professor.’ We also found that a female professor was more likely to receive comments that mention her personality and her appearance, and that refer to her as a ‘teacher.’”

As for the numbers, Professor Mitchell continues, “The comments weren’t the only part of the evaluation process we examined. We also looked at the ordinal scale ratings of a man and a woman teaching identical online courses. Even though the male professor’s identical online course had a lower average final grade than the woman’s course, the man received higher evaluation scores on almost every question and in almost every category.”

Student evaluations become weapons.  I have seen them used to try to kill the chances of tenure for a Hispanic female professor who had already won a teaching award!  I have seen them ignored for white male professors who have strong publications.  I have seen them used to kill faculty candidates.  Worse, I hear stories that mirror the ones I have observed.

In short, I have seen student evaluations used by associate deans and students alike the way a chimpanzee might use a lightbulb: Not for illumination!  The biases in student evaluations then are used to reinforce the biases of the department seeking to deny tenure to professors who aren’t white males.

The comparative metric might be applied wrongly as well.  For example, should evaluations for a mandatory second-year class such as Professional Responsibility be compared to the fun-lovingness of a first-year Torts class?  Should we be concerned about courses such as legal writing, taught predominantly by female professors, where feedback is given more often?  Studies show students retaliate in student evaluations when feedback isn’t to their liking.

Student evaluations might not even correlate well with student outcomes over the long run.  At least one study has indicated that the best teachers often get the worst evaluations “when learning was measured as performance in subsequent related courses.”

According to Professor Michelle Falkoff, these aren’t the only concerns.  Student evaluation response rates have gone down over the years.  You can guess who (the happy or the angry) is most likely to fill them out.  Worse, because they are anonymous, “the tone of their comments has started to resemble that of Internet message boards, with more abuse and bullying.”  And Professor Falkoff confirms that yes, students who were aware of some or all of their grades tended to be harder on faculty members in both written comments and numerical assessments.  Thanks a lot, new ABA standards.

Most administrators respond to these issues by nodding patiently and then proceeding to talk about the importance of good student evaluations.  It might be more helpful to reimagine them.  Why are student evaluations anonymous (yes, I know the irony of me raising this issue)?  How does one improve the response rate?  What questions SHOULD we be asking on the evaluations?  To what degree are universities just being lazy and making students do the work of observing our colleagues’ teaching?  And most importantly, why do we continue to rely on a metric that studies have shown time and time again to be biased against women and minorities?

As Professor Victor Ray states, student evaluations “are the perfect vehicle for a type of gender-blind discrimination because they allow one to claim detachment and objectivity. They pretend the ‘best qualified’ is measured and confirmed through a neutral process that just so happens to confirm the worst stereotypes about women. Recent research by Katherine Weisshaar shows that, even accounting for productivity, gendered differences in the way tenure committees evaluate women contributes to fewer women moving up.”

Two conclusions come to mind.  First, we don’t even know what the hell we are measuring with student evaluations.  Second, maybe it is just time to take away the weapon.


LawProfBlawg is an anonymous professor at a top 100 law school. You can see more of his musings here. He is way funnier on social media, he claims.  Please follow him on Twitter (@lawprofblawg) or Facebook. Email him at lawprofblawg@gmail.com.