Response to poorly argued opinion on student evaluations

This week the campus paper published an opinion piece by Harlan Hansen, professor emeritus in the College of Education and Human Development, entitled 'The missing factor of course evaluation discussion' with the subtitle 'The University should use its best faculty to teach and improve others.' I have commented previously on the issue of student evaluations and the release of that information to prospective students. Despite a previous attempt to have student evaluations released, which failed by a large margin, the proposal will not go away. Basically, I think the administration and associated faculty who want the information released should simply mandate that the information be released and send a big 'and fuck you too' to the 90% of faculty who do not want student evaluation information released. Otherwise, it seems like we will just keep discussing and voting on it until the vote comes out the right way.

Regardless, I want to rant about this opinion piece for several reasons.


First: the first paragraph, or as I like to call it, holywhatthefuck!

"When I arrived as a faculty member of the University of Minnesota in 1968 I remember a publication that rated course instructors. A few years later, I believe, it suddenly ceased publication because of faculty requests, I assume. Forty-three years later, the request for that information by students is still a nagging question."
Can you find all the logical fallacies? So in 1968 there was a publication that ranked course instructors. I will accept this claim at face value, but I have some questions: was this information disseminated to the student body, and if so, how? Was this information disseminated to the faculty as a whole, and if so, how? Was this information used by students to help decide which classes to take? How were course instructors rated? Did this publication rate every course and every instructor, including non-faculty instructors (I'm assuming some courses/labs were taught by graduate students in 1968, though this may not be the case)? I'm not sure if Dr. Hansen realizes this or not, but technology and the dissemination of information are fundamentally different in 2014 than they were in 1968.

Then we get to the second sentence, which includes both 'I believe' and 'I assume'. I cannot help but wonder what kind of academician Dr. Hansen was. Let's accept that this publication existed and ceased publication in the early 70s. Why should we assume that it ceased suddenly (did you hear the ominous music just then?) because of faculty requests? Dr. Hansen simply assumes it. Here are several other possibilities: maybe no one went to the library to consult this voluminous publication to help choose courses or for any other reason; maybe it was out of date by the time the information was published (remember, this was before computers collated all the information from scantron forms and imported it into Excel spreadsheets for rapid organization); maybe the costs associated with the publication were not offset by its usefulness. See, there are three other possible reasons without even trying. Your assumption carries no weight.


Finally, we get to the last sentence, which has little to no linkage to the previous sentences. Have students been requesting this information for 43 years? Is it really still a nagging question? Was there an outcry in 1983 for instructor rating information, even though that information wasn't actually being collected and therefore didn't exist? A student who turned in a paragraph like that in my classes would not fare well. But alright, let's assume an editor took out all the cohesion from an introductory paragraph that originally set up the issue to be addressed.


Second: unsubstantiated claims, or as I like to call it, pullingshitoutofmyass, I assume.


Consider the following points made by Dr. Hansen:

"students say they want information that will lead them to more interesting and effective professors. Second, faculty who were quoted in the news minimized the students’ requests as wanting easy courses with high grades by instructors who tell good jokes."
I'm sorry, but isn't an instructor who tells good jokes generally considered more interesting? Regardless, I have to concur that many, though not all, students would rather have an easier course on a topic than a more difficult course on that same topic. I could be wrong, but a slightly earlier opinion piece published in the campus paper seems to support my position.
"relative to my years of experience at the University, ratings of faculty instruction do not change over the years." 
I would not be surprised by this, but data, please. Also, how the hell does he know? Didn't this bible of instructor ratings stop being published in the early 70s? Maybe he was a department head and saw the student evaluations (when we actually had them), which would raise the question: why didn't he provide training for his ineffective faculty?
"“A” and “B” instructors have no problem sharing their ratings
Again, data please. Hell, I'll even provide a data point: on my student evaluations, using a 6-point scale (6 being the top score), I fall well above 5 in almost every category every year. In the remaining categories I still fall above 5 every year. So, does this make me an 'A' or 'B' instructor? It seems like it should, and if so, count me as an instructor who has a problem sharing my ratings.

Third: problem solving. 

Having established the problem using the holywhatthefuck and pullingshitoutofmyass approaches, Dr. Hansen then proceeds to assign blame. See, it's not just the ineffective instructors; it's an administration problem. (Again, I want to stress that we have never defined effectiveness or established criteria to quantify effectiveness other than student evaluations, which correlate best with students' expected grades.) And now we get to the solution:
"The president of the University should charge deans and department heads to put in place programs that can help all instructors improve over time."
Personally, I think these programs are useful and important. However, I wonder where the resources for deans and department heads to do this are going to come from. Programs do not come from a vault of readily available, no-cost resources. This solution also raises the question: why don't faculty development programs exist already? The answer is that they do; I have attended several. I wonder when Dr. Hansen retired such that he is unaware of them. Admittedly, these programs are voluntary, but they do exist.

Of course, Dr. Hansen does have a remedy for this apparent lack of teaching development:
"The key factor is assigning current colleagues who have demonstrated quality teaching skills to share and demonstrate with those in need. While this may appear threatening to individuals, it establishes a community of scholars within each unit where, eventually, everyone can share positive techniques with each other."
Yes, because nothing rewards successful teaching like getting more work and responsibilities to train and manage the ineffective instructors. Don't forget, these are the same ineffective instructors who really don't want to get better, as Dr. Hansen noted above when he stated that ratings of faculty instruction do not change over the years.
My opinion of the opinion:

I am still surprised by the whole student evaluation movement. We have no good data suggesting that student evaluations gauge effective instruction; some studies suggest they do, but many others do not. I have heard from colleagues that students simply want more information about a course, and god forbid they go to some commercial site like Rate My Professor. Well, I am all in favor of more information, as long as that information is valid for what you are attempting to learn/show. Student evaluations do not, I repeat not, seem to correlate with effective or quality teaching, so what information are the students receiving about a specific course/instructor? The student evaluation is being changed to ask the following questions, which are relatively minor changes in wording compared to the current evaluation:
1. The instructor was well prepared for class.
2. The instructor presented the subject matter clearly.
3. The instructor provided feedback intended to improve my course performance.
4. The instructor treated me with respect.
5. I would recommend this instructor to other students.
6. I have a deeper understanding of the subject matter as a result of this course.
7. My interest in the subject matter was stimulated by this course.
8. Instructional technology employed in this course was effective.
9. The grading standards for this course were clear.
10. I would recommend this course to other students.
11. Approximately how many hours per week do you spend working on homework, readings, and projects for this course?
   • 0-2 hours per week
   • 3-5 hours per week
   • 6-9 hours per week
   • 10-14 hours per week
   • 15 or more hours per week
Of these questions, only those in blue are proposed for release to the students. Questions 7 and 10 seem to provide useful information on whether the student liked the course or not. Question 8 is largely irrelevant to the discussion of instructor ratings. Questions 6 and 9 may provide insight into the instructor's effectiveness and fairness. Question 11 is the great equalizer: if two sections of the same class differ here, which do you think a student would gravitate towards? This is not to minimize student concerns; other things being equal, I would rather take a course that required less work too.

You may be asking, 'Why the hell aren't questions 2, 3, and 5 being released?' Good question, as these seem key to a student's ability to decide which courses they want to take. Courses do not exist in a vacuum; without an instructor we might as well attend Google University and write papers on vaccines and autism. Those questions are not being released because they may reflect specifically on an instructor. (Duh!)

Of course, we must consider: whatever will a student do without online release of student evaluations? I mean, what has happened over the last 43 years! If only there were some way one student could relate information to another student about a course. Some form of communication, I don't know, like texting, or tweeting, or posting to any number of social media sites; fuck, maybe they could simply open their mouths and have words come out in the direction of another student's ear. If only. Sadly, I doubt our students are even aware of these modes of communication.