For full disclosure, I declined to have my student evaluation information released, and I disagreed with the idea of releasing a list of the top 30% of teachers. Apparently I am in good company, because 90% of faculty did not agree to release their student evaluation data and the Senate Committee on Educational Policy ixnayed the top 30% idea. Now you may be asking why I disagreed with the release of this information, which is a good question. However, I think it's actually a good second question; the first is: what is the goal of releasing this information?
Today I only want to deal with the release of student evaluation information.
In regard to the release of student evaluations, the Vice Provost and Dean of Undergraduate Education says that 'the student release questions are designed to give students more complete information about a course and its teacher.' That may be true, so let's look at the questions that are up for release (only specific questions on the course evaluation are released to the student body).
There are 9 released questions:
- Approximately how many hours per week do you spend working on homework, reading, and projects for this course: 0-2, 3-5, 6-9, 10-14, or 15+ hours/week.
- Compared to other courses at this level, the amount I have learned in this course is: less, about the same, more, I have not taken other courses at this level.
- Compared to other courses at this level, the difficulty of this course is: less, about the same, more, I have not taken other courses at this level.
- I would recommend this course to other students: Yes, No.
- I would recommend this instructor to other students: Yes, No.
- Rate your instructor in terms of the following characteristics:
  - Is approachable: Agree, Somewhat (SW) agree, SW disagree, Disagree, NA
  - Makes effective use of course readings: Agree, SW agree, SW disagree, Disagree, NA
  - Creates worthwhile assignments: Agree, SW agree, SW disagree, Disagree, NA
  - Has a reasonable grading system: Agree, SW agree, SW disagree, Disagree, NA

OK, do these questions yield a more complete picture of the course and its teacher? And is this information only available from these surveys? The answer to question 1 is 'yes and no, but really no' and the answer to question 2 is 'no'.
I want to deal with the second question, 'is this information only available from these surveys?', first. The short answer is no. As noted in the article, many students use Rate My Professors as a source of information. The director of the MSA's University Policies and Student Concerns Committee noted that 'Rate My Professors isn't a sufficient tool to learn about teachers' and that it 'deals with trivial items like "hotness."' Well, here's what Rate My Professors obtains scores on:
- Overall Quality
- Helpfulness
- Clarity
- Easiness
- Hotness
If I compare the two evaluation systems:

- Overall Quality = Amount I learned, Recommend class/instructor, Worthwhile assignments
- Helpfulness = Recommend instructor, Approachable
- Clarity = Recommend instructor
- Easiness = Hrs/week, Difficulty, Reasonable grading system
- Hotness = Approachable?
[Rate My Professors data]
> One of the best professors I've ever had. This is NOT an easy class and there is a lot of work, but I have never taken a more worthwhile course - I learned critical thinking skills that I'll use for the rest of my life. As a bonus, Lorax is incredibly funny and really cares about his students and his eukaryotic microbes.

> This man is useless and biased. Class should be renamed "Lab Techniques in Eukaryotic Microbiology."
These are the two comments related to an advanced course I teach, along with the scores they gave me in the three scorable categories. I would argue that both comments provide some amount of information about the course that is helpful to students and is not included in the university survey. The first comment is positive about me and the course, but clearly notes that it 'is NOT easy'. Also, I'm apparently some kind of comedian (cue Joe Pesci). The second comment is negative, although its two statements are not logically linked or correct. I am not useless; at the very least my body is serving as host to trillions of microbes. I am biased, as is everyone, so that's not too helpful either. We could use some context here, but I expect student #1 did well in the course and student #2 did poorly. This latter hypothesis is supported by their inability to make a clear argument. Regardless, a prospective student learns something about the course from the second sentence of the latter comment, which is that we do discuss a lot of techniques used in molecular biology (not eukaryotic microbiology, as the student suggested). Again, this information is not released on the student surveys.
In my opinion, the reality is that the university survey data is no better than Rate My Professors data. As an aside, I must point out that student course evaluations are of limited value. Numerous studies have demonstrated that a student's evaluation correlates with the grade the student expects in the course, not with teaching effectiveness. One of the better studies I've seen takes this correlation to its logical conclusion:
> From a policy viewpoint, the findings of this study are important. As an increasing number of universities use student evaluations of teaching in administrative decisions that affect the careers of their faculty, the incentives for faculty to manipulate their grading policies in order to enhance their evaluations increase. Because grading policies affect student enrollment decisions and the amount students learn in their courses, the ultimate consequence of such manipulations is the degradation of the quality of education in the United States.
So here is one strong reason I did not allow my evaluation data to be released: I think student evaluations are by and large inappropriately used and are of limited value in both positive and negative directions. (I'm getting a sense of déjà vu, oh right.) This is why I said 'yes and no, but really no' in response to the question 'do these questions yield a more complete picture of the course and its teacher?'
It is interesting that there is no consideration that students may in fact talk with each other about courses and instructors. (By talk I mean text, tweet, or otherwise interact in a non-physical forum.)
Two closing tangential points:
1. It was noted in the article that a professor thought the percentage of those who release the data is 'disappointingly low'. However, there is no indication in the article of why this professor finds it disappointing. This professor also suggested that the percentage was low because faculty have to opt in, and proposed that the system be changed to force faculty to opt out instead. I find this a cynical and backhanded approach to the problem.
The professor may think that faculty did not realize they had to opt in but really wanted to. His approach would fix that problem, but it implies that 90% of the faculty are not that bright.

The professor may think that faculty are too lazy to check the box on the form to opt in but really want to. His approach would also fix that problem, but it implies that 90% of the faculty are lazy shits.

The professor may think either of the two above explanations is correct when, in reality, the faculty simply do not want to release the information. His approach would then sweep up those too stupid or lazy to opt out. This inflates the numbers but goes against the wishes of the faculty.
2. All the people interviewed for the article who want faculty to release their student evaluation data were at a loss for why faculty do not release it. Here's a thought. Since 90% of faculty do not release their data, maybe you could fucking ask some of your colleagues/faculty members their reasons. Maybe they are too lazy, or they simply forgot to check the box. But maybe, just maybe, some faculty think these evaluations are extremely poor resources for assessing the quality of a course/instructor and do not want to contribute to the administrative mindset that student course evaluations yield useful information about much more than the students' predicted grades.