Field of Science

Student Evaluations and Commercialization of Courses

There has been discussion at the University of Minnesota regarding the release of student course evaluation information to the student body. Last year I was sent an email asking if I would like to have certain aspects of my course evaluation information made available to students. I also received an email asking if I thought it was a good idea to release a 'top 30% of teachers' list based on student evaluations. Last month the school paper, the Minnesota Daily, published an article discussing these issues and the fact that the Minnesota Student Alliance wants more faculty to agree to these releases. Now that my fall courses are complete and I have my current student evaluations in hand, I want to comment on some of these issues.

For full disclosure, I declined to have my student evaluation information released and I disagreed with the idea of releasing a top 30% of teachers. Apparently I am in good company, because 90% of faculty did not agree to release their student evaluation data and the Senate Committee on Educational Policy ixnayed the top 30% idea. Now you may be asking why I disagreed with the release of this information, which is a good question. However, I think it's actually a good second question; the first should be: what is the goal of releasing this information?

Today I only want to deal with the release of student evaluation information.

In regard to the release of student evaluations, the Vice Provost and Dean of Undergraduate Education says that 'the student release questions are designed to give students more complete information about a course and its teacher.' That may be true, so let's look at the questions that are up for release (only specific questions on the course evaluation are released to the student body).

There are 9 released questions:
  1. Approximately how many hours per week do you spend working on homework, reading, and projects for this course: 0-2, 3-5, 6-9, 10-14, or 15+ hours/week.
  2. Compared to other courses at this level, the amount I have learned in this course is: less, about the same, more, I have not taken other courses at this level.
  3. Compared to other courses at this level, the difficulty of this course is: less, about the same, more, I have not taken other courses at this level.
  4. I would recommend this course to other students: Yes, No.
  5. I would recommend this instructor to other students: Yes, No.
     Rate your instructor in terms of the following characteristics:
  6. Is approachable: Agree, Somewhat (SW) agree, SW disagree, Disagree, NA
  7. Makes effective use of course readings: Agree, SW agree, SW disagree, Disagree, NA
  8. Creates worthwhile assignments: Agree, SW agree, SW disagree, Disagree, NA
  9. Has a reasonable grading system: Agree, SW agree, SW disagree, Disagree, NA
OK, do these questions yield a more complete picture of the course and its teacher? And is this information only available from these surveys? The answer to the first question is 'yes and no, but really no,' and the answer to the second is 'no'.

I want to deal first with the second question, 'is this information only available from these surveys?' The short answer is no. As noted in the article, many students use Rate My Professors as a source of information. The director of the MSA's University Policies and Student Concerns Committee noted that 'Rate My Professors isn't a sufficient tool to learn about teachers' and that it 'deals with trivial items like "hotness."' Well, here's what Rate My Professors obtains scores on:
  • Overall Quality
  • Helpfulness
  • Clarity
  • Easiness
  • Hotness
If I compare the two evaluation systems:

Overall Quality = Amount I learned, Recommend class/instructor, worthwhile
Helpfulness = Recommend instructor, Approachable
Clarity = Recommend instructor
Easiness = Hrs/week, Difficulty, Reasonable grading system
Hotness = Approachable?

[Image: Rate My Professor data]
Now we can argue over the particulars, but I think the two systems provide basically the same information to prospective students deciding whether to enroll in your class. There are more categories (questions) in the university survey, but you should note that Rate My Professors allows the inclusion of comments, which the university survey does not. So you can get information like this:
One of the best professors I've ever had. This is NOT an easy class and there is a lot of work, but I have never taken a more worthwhile course - I learned critical thinking skills that I'll use for the rest of my life. As a bonus, Lorax is incredibly funny and really cares about his students and his eukaryotic microbes. 
or this 
This man is useless and biased. Class should be renamed "Lab Techniques in Eukaryotic Microbiology."
These are the two comments related to an advanced course I teach, along with the scores those students gave me in the three scorable categories. I would argue that both comments provide some amount of information about the course that is helpful to students and is not included in the university survey. The first comment is positive about me and the course, but clearly notes that it 'is NOT easy'. Also, I'm apparently some kind of comedian (cue Joe Pesci). The second comment is negative, although its two statements are neither logically linked nor correct. I am not useless; at the very least my body is serving as host to trillions of microbes. I am biased, as is everyone, so that's not too helpful either. We could use some context here, but I expect student #1 did well in the course and student #2 did poorly. The latter hypothesis is supported by that student's inability to make a clear argument. Regardless, a prospective student learns something about the course from the second sentence of the latter comment: we do discuss a lot of techniques used in molecular biology (not eukaryotic microbiology, as the student suggested). Again, this information is not released with the university surveys.

In my opinion, the reality is that the university survey data is no better than Rate My Professors data. As an aside, I must point out that student course evaluations are of limited value. Numerous studies have demonstrated that a student's evaluation correlates with the student's expected grade in the course, not with teaching effectiveness. One of the better studies I've seen takes this correlation to its logical conclusion:
From a policy viewpoint, the findings of this study are important. As an increasing number of universities use student evaluations of teaching in administrative decisions that affect the careers of their faculty, the incentives for faculty to manipulate their grading policies in order to enhance their evaluations increase. Because grading policies affect student enrollment decisions and the amount students learn in their courses, the ultimate consequence of such manipulations is the degradation of the quality of education in the United States.

So here is one strong reason I did not allow my evaluation data to be released. I think student evaluations are by and large inappropriately used and are of limited value in both positive and negative directions. (I'm getting a sense of déjà vu, oh right.) This is why I said 'yes and no, but really no' in response to the question 'do these questions yield a more complete picture of the course and its teacher?'

It is interesting that there is no consideration that students may in fact talk with each other about courses and instructors. (By talk I mean text, tweet, or otherwise interact in a non-physical forum.)

Two closing tangential points:

1. It was noted in the article that a professor thought the percentage of those who release the data is 'disappointingly low'. However, there is no indication in the article of why this professor finds it disappointing. This professor also suggested that the percentage was low because faculty have to opt in, and proposed that the system be changed to force faculty to opt out. I find this to be a cynical and backhanded approach to the problem.

The professor may think that faculty did not realize they had to opt in, but really wanted to. His approach would address that problem, but it implies that 90% of the faculty are not that bright.

The professor may think that faculty are too lazy to check the box on the form to opt in, but really want to. His approach would also address that problem, but it implies that 90% of the faculty are lazy shits.

Or the professor may believe either of the above, when in reality the faculty simply do not want to release the information. His approach would then catch those too stupid or lazy to opt out, which inflates the numbers but goes against the wishes of the faculty.

2. All the people interviewed for the article who want faculty to release their student evaluation data were at a loss as to why faculty do not release it. Here's a thought. Since 90% of faculty do not release their data, maybe you could fucking ask some of your colleagues/faculty members their reasons. Maybe they are too lazy or they simply forgot to check the box. But maybe, just maybe, some faculty think these evaluations are extremely poor resources for assessing the quality of a course/instructor and do not want to contribute to the administrative mindset that student course evaluations yield useful information about much more than the students' predicted grades.


The Phytophactor said...

For reasons not altogether clear, my teaching evaluations have always been pretty high. Student evaluations clearly tell me what students like and dislike, but that's about it. They just aren't very good at assessing teaching beyond students' likes and how they are doing grade-wise, i.e., doing well = a good teacher, doing badly = a poor teacher. But in terms of evaluating the evaluations, a teacher who pleases all of the students all of the time is probably pandering and certainly is not sufficiently challenging his students. A certain percentage of students simply don't want a challenge. And if you want to tell, simply look at their exams and assignments. Cake is easy to recognize.

The Lorax said...

Agreed on all points.

Also, cake is extremely tasty... I think I'll spend some time in the kitchen this evening now.

Becca said...

If they rolled out a proposal to tie faculty bonuses to student evaluations, I could see why you'd be against them. But really, I don't see why it matters how useless you think the data are if you aren't going to be the one using them for course selection. What this does is formalize some of the ways students already discuss your courses (the students who really benefit are those who aren't plugged into social networks like the Greek system, where students already get lots of perspectives on instructors from students ahead of them).

Also, whether a system is "opt in" or "opt out" IS predicated on the assumption that people, by default, are selfish and/or lazy. There really is no other way to explain, for example, abysmal organ donation rates. Do professors donate organs at much higher rates than the general population? Do you have any data to support the notion that you people are somehow magical special selfless unicorns and that your resistance to change is anything other than rank contempt for the people you're working with?

Additionally, structuring things as opt out sends a message that participating is the norm. The social norm that student opinions on courses matter is a healthy one, though there are unhealthy things that can be done with the data.

The Lorax said...


Thanks for the comment. First, I am against the release because I find the justifications given by the administration misguided (stupid would be another word). It's no different than recommending students check out Rate My Professors for information. Students not plugged in do not benefit from the university system any more than from Rate My Professors. There may be validity to releasing the data, but the administration did not reveal what it is, and it is not my job to figure it out for them; they get paid much more than I do.

I see where you are coming from on organ donation, but I do not have the data you requested. Regardless, I disagree with you that faculty (as a whole) are selfish and/or lazy. If I were given a reason I thought was valid, I would check the box.

Finally, I agree that student opinions matter. I take the opinions my students give me seriously, and in fact I request feedback both in general and through specific questions. Each year I look back and try to improve my course. Unfortunately, 'how many hours a week do you devote to this course' is not helpful.*

*Unless the average was beyond that indicated by the university, which for my course is 9 hours per week for the average student to obtain a C.