Caveat: Venter

Think about all of the things that make your brain itch. These are mine.

Monday, October 17, 2005

On Student Evaluations

Every semester, it seems, my students greet evaluation day with as much glee as I. In three and a half years, my students have never failed to give me the two things I most appreciate in evaluations: generally strong marks and one or two areas that could use some (relative) improvement.

I do my best to indicate to my students my interest in honest, even harsh, evaluations so that I might improve in subsequent semesters, though I sometimes wonder if that does not, by itself, increase their respect for me and thus increase the scores they give. There is no way for me to test that accurately.

Scrivener and his commenters have noted that students are not, perhaps, the most qualified evaluators of educators, and I have to agree. Still, they know whether or not they are improving, and that is what seems to guide my students when they compose their comments at the end. There is, however, one major problem with the evaluation process, and I would love to see it fixed (note that some schools already employ what I am about to suggest, but I do not work for those schools).

The aggregate scores from student evaluations should be made available for student review. While sites exist where students can post their opinions of faculty for public consumption, not all students know or care about them. Worse, the bulk of reviews seem to be, as noted in this piece from Wired News, either rants or raves, entirely missing the middle.

If schools made aggregate information available from mandatory on-campus reviews, everyone, including hiring and tenure review committees, could see what past students have said about instructors and professors. We educators should welcome the exposure, understanding that students may sometimes harbor negative sentiments for, at times, unfair reasons. When we look, however, at those educators who win awards for excellence in the classroom (as determined, ultimately, by faculty and administration), we often find that they are the same ones whose students give high marks. I have no fear of openness, and I should hope none of my colleagues fear it either.


At 9:38 AM, Blogger RussianViolets said...

I agree with you, Andrew, but I work in a place where academic freedom is very touchy, and I imagine that many colleagues would object on that ground. Maybe some middle ground? A subscription service or something with voluntary participation?

At 2:38 PM, Blogger Andrew Purvis said...

I would not suggest that we include written comments. The numerical results would be fine. I wonder, though, how a school might implement a voluntary system. Would someone who had opted in be allowed to opt out later if his or her numbers declined?

I do worry that students, when they can choose to provide feedback on such sites, often do so out of extreme motivation: either they loved that course and professor or they hated it. This skews numbers tremendously. However, I am curious why any student should choose to rate someone if the student never has access to the aggregate results. Perhaps I always assumed (bad Andrew!) that I had that access as a student, but it seems reasonable to me that students should have access to that data.

Does this pose a risk? Sure. It may skew enrollment slightly, but that may also work out for the better. Consider that personality matches between students and professors make a difference in student performance. Some students crave classes with the tougher faculty, thriving on the challenge. The flip side, of course, is that the less motivated students or those who fear the subject may seek out the "light touch" types, which may in turn lead to grade inflation among the population most in need of a good whacking in the gradebook.

It's a tough one.

At 8:55 PM, Blogger Khara said...

Post them! That would be super, super fun!

At 10:13 PM, Blogger Andrew Purvis said...

I would have to find them and then type in a LOT of data. Consider that I would be looking at close to twenty classes with data spanning, in most cases, more than a dozen questions, and the data get really complex.

At 4:35 PM, Blogger Ahistoricality said...

Perhaps, to avoid the extremist domination of sites like ratemyprofessor, faculty should routinely provide links to those sites and encourage all students to participate.

Data I've seen suggest that turnout would still be very low, but it would help some.

At 4:35 PM, Anonymous Anonymous said...

Make course evaluations public? I don't think it is a good idea. What about addressing the issue of WHY students give good or bad comments? I assure you one thing: if you're getting raving comments, your class is easy. Read a recent article on student evaluations in the journal of the AAUP, and you will realize what a failed system student evals are. So many colleagues of mine on the tenure track tell me that they feel obliged to dumb down their classes for fear of not getting tenure. Wow, what a great thing for education this is! Let's make it easy, behave like a clown in front of the class, pass everyone, and get the maximum ratings in this stupid thing called student evaluation!
You say: "Still, they know whether or not they are improving, and that is what seems to guide my students when they compose their comments at the end." What kind of nonsense is this??! If you teach at an institution where your average student has the amount of self-awareness necessary to judge whether they "are improving," especially in a ridiculously short amount of time (the 14 weeks of a term we call a "semester"), then you must be teaching on another planet. Don't get infatuated with your high scores; start making your class harder and you will see that even if you were considered "hot," that will turn into "hot... NOT!"

At 2:02 AM, Blogger Andrew Purvis said...

I love this kind of rant. I really do. Here is someone too afraid to sign a name, someone unaware of what my classes are like, yet still someone willing to assume that my courses are easy.

In every class I taught, I got a reputation, which I heard about from students who had already passed under other instructors, for being someone whose classes were best left to students who wanted to learn something. Some of my students have liked me, but it is a non sequitur to claim that this necessarily means I did not teach well. Indeed, many of my colleagues commented that I gave my students more difficult assignments than they would have, and even students who did not pass my classes came back to try again with me.

Yes, my courses must be a breeze. Walking through the halls and reading posted grades each semester, I was shocked by the high grades others gave by comparison.

Education need not be dull to be effective. Indeed, anonymous (or as many sites out there say, "anonymous coward," though I expect this one is not likely to return and read this) might do well to look at research that suggests it is those classes that combine a variety of approaches with a solid foundation that have the greatest success.

It is interesting to note, in fact, that my best rave came from a student who had barely survived my grading system, pushing himself hard for a C. Three times. I did my best to listen to what he needed and provide assistance when he would meet me on campus, call my personal cell phone (which I gave out to all of my students), or email me with questions about assignments. When he needed to rewrite assignments, he did, always improving, though not always reaching the maximum rewrite value.

It is true that student evaluations, as they are currently constituted in most places, are poor and do little to address the reasons students rate as they do, but to claim that this shortcoming is a reason not to publish evaluations is incredibly shortsighted. I can envision, quite easily, more detailed evaluations that address these points adequately, and I pity those (such as the commenter above) who lack the brain power to conceive of such a change.

As to the "hot" bit, well... not a single pepper has ever been entered by my name on that site, despite my having taught at four institutions. I guess that wasn't it, huh?

At 7:16 PM, Anonymous Greg said...

The issue of student evaluations is certainly a hot topic. I disagree with the idea that evaluation scores or other information should be published or made available to the public. The purpose of the evaluations is not to give a professor a platform to brag or boast. Teaching isn't a competition. The purpose is to encourage improvement in teaching standards. However, the current system of student evaluation is flawed. I know this from experience.

I've taught for more than nine years and have seen a noticeable correlation between evaluation scores and student grades. When I first began teaching (I'm now ashamed to say), my classes were too easy. I was young, twenty-five, when I taught my first class, and I was overly concerned with my "performance" and took great efforts to win the admiration of my students. My evaluations were consistently at or near perfect. With experience, however, I became less concerned with my "performance" and more concerned with challenging and engaging my students to move beyond their familiar, often comfortable, academic standards. The classes became less easy, and the grade inflation deflated. I gave fewer A's and B's, but because of my higher standards, my students showed a noticeable improvement in their academic performance. However, my evaluation scores were less than perfect. Many of my colleagues, especially those new to the teaching game, boasted higher evaluation scores. But they also admitted to inflating grades, some never giving below a B-, even to students who submitted incomplete or substandard work.

From experience, I know the "tricks" to raise my evaluation scores. However, I believe providing students with honest feedback and an honest grade is far more important than inflating my own ego. So, while I appreciate your opinion, I strongly disagree with the idea of making student evaluations public. Unfortunately, the current evaluation system is flawed in that it rewards "performance" over academic integrity.

At 8:07 PM, Blogger Andrew Purvis said...

Greg, I appreciate your comments. My reason for publishing evaluations springs from my belief that students can find the better instructors and, on the whole, will seek those who will provide better instruction. Faculty evaluations, on the other hand, are a superb way of reinterpreting student evaluations, and I do not think these should be published.

I had fairly high student evaluation scores, but the correlation to high grades was not there. I got top marks from students who worked to come up to college level, managing perhaps a C after rewriting assignments. I have had students get D's and F's from me, only to seek me out again because they were improving as writers.

I suppose there is some correlation between grades and evaluations, but in my experience the openness of the grading system, equity in implementation of stated policies, and the willingness to work with students (I sometimes took calls at home on Saturdays to answer questions) do more than handing out inflated grades. If anything, I leaned to the conservative side in my grading, and the only students who ever complained were those caught plagiarizing.


Post a Comment

<< Home