Weaponizing Student Evaluations (Part III)

LawProfBlawg offers some ways to fix student evaluations.

For the past two weeks, I have been talking about the weaknesses of student evaluations.  They harbor inherent biases against women and minority faculty members.  They are often used as a means of harassment.  Faculty members use them to confirm their own institutional biases.  And they tell us precious little about the teaching environment.  Yet administrators are unlikely to get rid of student evaluations; instead, they defend their use.  “They are flawed,” an administrator might say out of one side of his mouth, all the while relying on them.

Here are some recommendations:

  1. Student evaluations should be anonymous to professors, but not to the administration. Administrators need to be able to identify students who use evaluations to harass or bully professors, not least to protect the university against future hostile work environment claims.  Otherwise, you’re forcing female faculty members to read sexually charged comments that have nothing to do with pedagogy.

 

  2. Student evaluations should be tracked across courses for each student. Imagine the misogynist student who only takes classes from women.  Wouldn’t it be nice to know that’s happening when appraising the veracity of that student’s evaluations?  What if the student is racist?  The only way to spot those patterns is to track a student’s evaluation biases across courses and professors.  Of course, that means we need data about the professor as well, such as race and gender.  (A rough sketch of what that tracking might look like follows.)
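
I’m a law professor, not a data scientist, but here is a minimal sketch of what that kind of tracking might look like. Everything in it is hypothetical: the table, the column names, and the one-point threshold are mine for illustration, not any registrar’s actual system.

```python
import pandas as pd

# Hypothetical evaluation data: one row per (student, course) pairing.
evals = pd.DataFrame({
    "student_id":        [101, 101, 101, 102, 102, 102],
    "instructor_gender": ["F", "F", "M", "F", "M", "M"],
    "rating":            [1, 2, 5, 4, 4, 5],  # 1 (worst) to 5 (best)
})

# Each student's average rating, broken out by instructor gender.
by_gender = (
    evals.groupby(["student_id", "instructor_gender"])["rating"]
         .mean()
         .unstack("instructor_gender")
)

# Flag students whose ratings of women trail their ratings of men by
# more than a full point -- an arbitrary threshold, purely for illustration.
by_gender["gap"] = by_gender["M"] - by_gender["F"]
flagged = by_gender[by_gender["gap"] > 1.0]
print(flagged)
```

The point isn’t the arithmetic; it’s that the pattern is invisible unless someone links a student’s evaluations across courses.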

 

  3. Analysis of student evaluations should actually utilize information about the evaluator. That means information about the student is needed, and that information should be linked only after the evaluation is submitted. What grade did the student earn in the class?  Information about the student’s race and gender can be important signals as well.  What if the professor is saying things in the classroom (things like this) that signal hostility toward minority students?  Patterns in who gives the professor poor evaluations might help surface that.


 

  4. Student evaluations should be about the course and the instructor. What I mean by this is that more data are needed about the professor teaching the course.  This would allow for cross-course comparisons of a meaningful sort.  It could also reveal the degree to which your own institution exhibits the biases well documented in the literature.  Did the professor have five new preps in three years because the associate dean was already targeting the professor?

 

  5. Comparisons in student evaluations should not be made on an aggregate level. Imagine a professor who teaches only mandatory upper-level courses like Professional Responsibility being compared against the ever-popular Imaginary Unicorn Law course.  The comparison is analytically wrong.  Now, if a mandatory course is doing poorly on average against the Unicorn Law course, those data may communicate something about the mandatory course.  But in terms of teaching style, the information is probably not all that useful.  (A toy example of comparing within course type follows.)
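
To make the apples-to-oranges point concrete, here is a toy sketch, again entirely my own invention with made-up numbers, that compares each course only against the mean of its own stratum: mandatory against mandatory, elective against elective.

```python
import pandas as pd

# Hypothetical course-level evaluation averages, tagged by course type.
courses = pd.DataFrame({
    "course":   ["Prof. Responsibility", "Fed Courts", "Unicorn Law", "Law & Film"],
    "type":     ["mandatory", "mandatory", "elective", "elective"],
    "avg_eval": [3.4, 3.6, 4.8, 4.5],
})

# Compare each course against the mean of its own stratum,
# not against a pooled school-wide mean that mixes the two.
courses["stratum_mean"] = courses.groupby("type")["avg_eval"].transform("mean")
courses["vs_stratum"] = courses["avg_eval"] - courses["stratum_mean"]
print(courses)
```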

 


  6. Evaluations by the Pollyannaish and the pissed should be discounted. If a student gives every professor in every course they take the highest rating, what information is being communicated?  The same question arises if a student is already pissed at the school and is taking it out on faculty evaluations.  In both instances, that student’s evaluations ought to be discounted.  (A crude filtering sketch follows.)
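
One crude way to operationalize that discount, sketched with invented data: drop any student whose ratings never vary and sit at an extreme, whether all fives or all ones. The zero-variance rule is my own illustrative cutoff, not a validated one.

```python
import pandas as pd

# Hypothetical ratings: one row per (student, course) pairing.
evals = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "rating":     [5, 5, 5, 1, 1, 1, 4, 3, 5],
})

stats = evals.groupby("student_id")["rating"].agg(["mean", "std", "count"])

# Discount students who show no discrimination at all: every professor
# gets the same extreme score, whether Pollyannaish (all 5s) or pissed (all 1s).
undiscriminating = stats[(stats["std"] == 0) & stats["mean"].isin([1, 5])].index
filtered = evals[~evals["student_id"].isin(undiscriminating)]
print(filtered)
```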

 

  7. Do not ask open-ended questions that invite problems. Questions should be more targeted.  “What did you like about the instructor?” is just asking for problematic responses.

 

  8. Don’t ask questions that call upon the student to be psychic. Here’s an example (although I admit, a lame one): “Was the instructor prepared?”  A better question would be: “What indications did you have of the instructor’s preparedness?”  This doesn’t resolve the problem, but it does get closer to discussing things that matter to pedagogy.  If you think I’m wrong, consider the question asking how many classes the professor has missed.  Every school and department in which I have taught asks that question.  And every year the students disagree about how many classes I’ve missed, even when I haven’t missed any.

 

  9. Do not ask students to answer questions beyond their skill set. Asking a student “will this course help you in your practice of law?” is not likely to yield any important information, because most students haven’t practiced in that area yet, or practiced at all.

 

  10. Associate deans and department chairs need to be more sophisticated about interpreting evaluations. Don’t be my former associate dean, who called me in to talk about how terrible a teacher I was after I got flamed on one evaluation.  Don’t be that person.  A professor’s teaching evaluations will fluctuate.  Is the professor enduring some personal trauma?  Is the professor burnt out?  Longitudinal analysis might give some insight, but only if it is used for the good of helping the professor grow as a teacher, not for retaliation and punishment.  (A simple sketch of such an analysis follows.)
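
Here is a minimal sketch of the kind of longitudinal look I mean, with invented numbers and an arbitrary one-point threshold: compare each semester against the professor’s own recent baseline, so one flamed evaluation doesn’t trigger a summons to the dean’s office.

```python
import pandas as pd

# Hypothetical semester-by-semester evaluation averages for one professor.
history = pd.DataFrame({
    "semester": ["F21", "S22", "F22", "S23", "F23", "S24"],
    "avg_eval": [4.2, 4.3, 4.1, 4.2, 3.1, 4.0],
})

# Each semester's baseline: the trailing mean of up to four prior semesters.
history["baseline"] = (
    history["avg_eval"].rolling(window=4, min_periods=2).mean().shift(1)
)
history["dip"] = history["baseline"] - history["avg_eval"]

# A dip of more than a point below the professor's own baseline (my made-up
# threshold) is a prompt for a supportive conversation, not a punitive one.
print(history[history["dip"] > 1.0])
```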

 

  11. Have people on faculty hiring or promotion and tenure committees who understand the problems with student evaluations. If I had a dime for every time a minority candidate lost out because her teaching evaluations were lower than the white candidate’s, I’d have a lot of dimes.  If there are people on your committees who say things like “maybe white men just teach better” or who demonstrate bias based on race, socioeconomic status, gender, sexual identity, or alma mater, they shouldn’t be on those committees at all.

 

LawProfBlawg is an anonymous professor at a top 100 law school. You can see more of his musings here. He is way funnier on social media, he claims.  Please follow him on Twitter (@lawprofblawg) or Facebook. Email him at lawprofblawg@gmail.com.