Professors discuss getting graded by students

This story was originally published in the April 27 print issue.

The arrival of May heralds a familiar ritual for Mason students. Amid the stress and sleepless nights synonymous with final exams, they are required to fill out teacher evaluations.

Completing the standardized forms takes about 15 minutes, nothing compared to long hours of studying, but their significance should not be underestimated, faculty and administrators say.

“Students are one really important voice in how well a faculty member is doing,” said Kim Eby, associate provost for faculty development and director of the Center for Teaching and Faculty Excellence. “If I’m teaching my course, I need to know from my students what’s working well… what strategies or assignments I might be assigning they don’t feel are helping their learning.”

Of particular interest to teachers are the written comments, which allow students to give more personal, in-depth feedback.

“You get a range of comments – some of them are just disgruntled, some of them are ‘I adore you,’” said Tamara Harvey, an associate professor of English. “Those really sort of critical comments that have specific suggestions and talk fully about what worked and what didn’t, those are the best. But it also requires students caring enough to sit back and think this is worth fixing, and this might be one way to make it better.”

Faculty recognize that students approach the evaluations differently, investing variable levels of effort and attention and applying their own values, attitudes and perspectives.

“I kind of pay a little more attention to the numbers because there are usually very few comments,” said Michael Malouf, director of the English department graduate program. “You want to have something that’s consistent over time, with an awareness that not every student completes the evaluations, and they don’t put as much thought into it as teachers probably do.”

Whereas the comments are kept strictly confidential, seen only by individual faculty members unless submitted for review, the numerical data is available to everyone at Mason. Students can access records of evaluations dating back to 1998 through the Office of Institutional Research and Reporting website.

An arm of the provost’s office, the Office of IRR is responsible for administering and processing the evaluations. After the forms are turned in, they get scanned into a computer system that generates a list of instructor and department averages from the scores.

“It’s a big job every semester to get [the evaluations], scan them and send them out again,” said Robert McDonnell, the office’s course rating specialist. “In the fall, we have to have them done by the time spring semester starts, so we have to get them done fast. In three weeks, we have to scan 4,000 course evaluations.”

Not long ago, online evaluations were introduced to supplement the traditional paper forms. Helen Wu, webmaster for the Office of IRR, wrote the program for these and monitors it on a daily basis.

Once the analysis is complete, the evaluations are re-packaged and sent to the various department chairs.

“I think students sometimes wonder whoever looks at those,” Eby said. “And the answer is, many people look at those. The department head or the department chair will look at those. Oftentimes, the Office of the Provost looks at those. So they are pretty serious evaluations and get taken very seriously.”

For the most part, teachers are left to interpret student critiques at their discretion. Particularly low scores, however, prompt a review by the dean and department chair.

“There are lots of reasons someone could get a low score in any given semester, so I don’t automatically assume because someone has a bad evaluation score that there’s a problem,” said Deborah Boehm-Davis, the College of Humanities and Social Sciences dean. “Usually I would talk to the chair and I would say, are there extenuating circumstances? Do you know that this is a known problem?”

If a faculty member receives poor evaluations over an extended period, a remediation plan is developed, and he or she is referred to the Center for Teaching and Faculty Excellence, which provides training and support. Chronically unsatisfactory performance can also affect salaries and hurt term faculty members' prospects of contract renewal.

The extent to which student evaluations can be trusted is a topic of heated debate.

Recent studies by Northeastern University and North Carolina State University suggest that they are heavily influenced by gender biases. Women not only tend to earn lower ratings than their male counterparts, but are also routinely described in loaded terms, like “bossy.”

“I’ve found that students have different expectations for female faculty members than they do with male faculty members and that comes out in the teaching evaluations and classroom interactions,” said Shannon Davis, associate director of the undergraduate program in sociology. “Students expect female faculty members to be nice and motherly, and when we aren’t, we’re penalized. Whereas when men are warm, it’s such a surprise that they’re rewarded for it.”

Some find the questions themselves inadequate and too vague to be genuinely informative. Kris Smith, associate provost for the Office of IRR, thinks they would be improved by implementing different versions for different types of classes. Lab evaluations, for instance, would be separate from those for lectures.

The problem, she said, is that “it’s such a labor-intensive process. We would really have to go completely online for my office to handle the kind of changes that I think would be good.”

It is not unheard of for professors to distribute their own evaluations in addition to those mandated by the university. Midterm evaluations are particularly common.

“I think those are often more meaningful, in part because the instructor isn’t thinking about the larger institutional context, so she’s asking the questions she really wants answers to,” Harvey said. “Students are invested in it because it’s change that’s happening in that classroom; at the end of the semester, it’s like, you survived, now you’ve got to go somewhere else.”

Student evaluations are just one element used to judge faculty performance. All academic units conduct annual peer reviews, the primary basis for determining salary increases, according to the faculty handbook.

Although the exact procedure differs between departments, peer reviews normally involve classroom observations as well as assessments of teaching materials, including syllabi and key assignments.

The university also looks at faculty research, service and, if applicable, administration. The weight of each component depends on one's position and teaching load. Tenure-track faculty typically teach two courses a semester and are expected to contribute to their field outside the classroom, such as by producing original scholarship. Term and adjunct faculty, meanwhile, are evaluated solely on their teaching.

Mason’s recent shift toward a more research-oriented model puts extra strain on faculty, many of whom already struggle to balance time commitments.

“We’re all passionate about our scholarship,” Davis said. “But again, given that the university is relying on part-time and adjunct faculty members, many of whom would like to have research agendas but don’t have the resources to make that happen, we’re really not doing ourselves any favors.”

In general, though, teachers welcome the opportunity to engage with current scholarly knowledge. While Harvey acknowledges that faculty seldom gain raises or promotions based on their teaching alone, she views research as fundamental to her effectiveness in the classroom rather than as a distraction or burden.

“One thing that happens is you bring in things you’re thinking about in your own writing to class,” she said. “In my experience, there’s something interesting that goes on when I’m struggling with an article or trying to work through a new idea and I’m talking to students at the same time about their own wrestling with their ideas. There’s a lot of synergy that goes on.”

In the end, faculty agree that while student evaluations have their uses, they also have plenty of limitations. On occasion, they can inhibit creativity and growth.

“It can be intimidating to try new things, particularly if you fail and students say they didn’t really like it and you get penalized,” Eby said. “That can sometimes discourage a very well-intentioned person from trying to do something different because it can be perceived as risky.”

Carla Marcantonio, a film and media studies assistant professor, doubts that it is possible to create a definitive measurement of quality teaching because the process is inherently subjective.

“Teaching is really about learning your way, figuring out how to have that rapport in the classroom,” she said. “It’s not a mechanical thing. As cheesy as it sounds, I think it’s more of an art form.”

Photo Credit: Johannah Tubalado