Course Evaluation Changes Leave Students Wanting More

Photo courtesy of Vijay Iyer

BY DOMINIC PINO, STAFF WRITER

On April 3, the Faculty Senate endorsed the 14 recommendations of the Effective Teaching Committee to change the course evaluation process. Two of the three motions passed unanimously; the third passed with a single dissenting vote. As I wrote last Tuesday, the proposed changes weaken students’ voice in the course evaluation process.

The votes express the Faculty Senate’s sentiment to the provost’s office that it concurs with the Effective Teaching Committee’s recommendations. The meeting was standing room only for visitors, most of whom were faculty and staff. (I must say, it was quite surprising to see so many employed people with nothing better to do at 3 p.m. on a Wednesday.) President Cabrera’s slot on the agenda ran long due to lengthy questioning, and the votes on the course evaluation changes were rushed at the end of the meeting because of time constraints.

During the meeting, Professor Lorraine Valdez Pierce, chair of the Effective Teaching Committee, said the recommendations are the result of lots of work. “We spent hours, days, weeks, months, and years, five years, on this project,” she said.

Aside from a discussion of Wi-Fi infrastructure that bordered on parody (there were concerns that if all 150 students in a lecture hall class were using an online evaluation form at the same time, it might crash the internet), the debate on the recommendations was uneventful.

Dr. Pierce initially agreed to comment for this article when I spoke with her after the Faculty Senate meeting, but then declined to comment over email until the committee meets again on April 10. However, I was able to speak at length with Professor Keith Renshaw, chair of the Faculty Senate, scholar and gentleman, on the phone the day after the meeting to get his input.

One of my concerns with the recommendations was the complete absence of citations. Dr. Renshaw is not on the Effective Teaching Committee and did not write the recommendations, but he was able to email me a lengthy list of references on course evaluations. I am now confident that the recommendations were grounded in relevant scholarly literature. I am also confident that if a student turned in a reference page after a final report was due, it wouldn’t fly. It’s not unreasonable to express skepticism toward sentences that begin, “Research has shown…,” and cite nothing. In fact, it’s our obligation as critical thinkers to do so.

Dr. Renshaw does not see course evaluations as the faculty equivalent of students’ final grades. “Evaluations are an assessment of someone’s performance of a job duty, as opposed to grades which are an assessment of learning and mastering information,” he said.

As such, he sees student evaluations as one of many ways to assess professors’ professional performance. Dr. Renshaw emphasized that peer review takes into account professors’ expertise both in the subject being taught and in instruction generally.

That certainly is valuable expertise, but should peer review count for 50 percent, as the recommendations suggest? When I asked Dr. Renshaw about that, he said he didn’t recall that specific recommendation and would need to look more carefully at the report. However, he did say, “I can totally see someone saying that skews things too far in one direction.”

Photo courtesy of Vijay Iyer

When I asked about a different kind of expertise, namely the expertise of students who are in the classroom all the time, he said, “This is one of the tough parts, because students are there all the time, and we never want to discount that.”

I’m glad to know that is the case, and I hope the final product of these course evaluation deliberations weights student input more heavily. Being a student is the only thing you can do for fifteen or more years without anyone ever calling you experienced. College students have years of experience with teachers and know a good teacher from a bad one. That’s experience too, and it should be treated as such.

What do those experienced students think about these proposed changes? I spoke with multiple members of student government about this issue.

Sophomore Mackenzie Nelson, chair of the Student Senate University Academics Committee, told me, “As a student, I would like to know more.” While noting she could not speak for the academics committee at this time, she expressed interest in meeting with both Faculty Senate members and administration to start a dialogue on the course evaluation process.

Academics Committee Vice-Chair Sami Gibbs, a freshman, said she was specifically interested in the recommendations’ claim about disparities in evaluations for women and people of color. “A lot of these proposals are vague, and I would like to see the research and have discussions with faculty as this process continues,” she said.

While they could not say exactly what actions the Student Senate would take at this time, both Nelson and Gibbs agreed it was important to hold faculty accountable.

Senior Pat Grady, a grizzled veteran of Student Government who has served in all three branches, emailed me his concerns about the proposals. He wrote, “What troubles me about the proposed changes is that it takes a faculty-centric approach. I fear that changing the course evaluations method will only diminish the student voice at George Mason. I hope that in the future the student experience is considered to a greater degree when making decisions about how we evaluate our professors.”

I also spoke with our newly elected student body president, Camden Layton. “I wish there was more of a student voice in this process,” he told me. “I don’t think many of the moves are terrible, but I would like more student voice in working with the OIRE to develop these proposals in the future.”

Layton acknowledged that peer review provides valuable information about intradepartmental work that students might not always see. But he said the suggested proportions of 50 percent peer review, 30 percent self-assessment and 20 percent student evaluation aren’t right. “50-30-20 are not the right numbers, and self-assessment shouldn’t count for more than the student view,” he said.

Students aren’t the only advocates for the student view. I spoke with Prof. Thomas Rustici (whose class I skipped to attend the Faculty Senate meeting) in his Buchanan Hall office about the proposed changes to course evaluations, and he said these proposals seem to be designed to protect professors from being held accountable.

“So much of higher education from tenure on down is designed to treat students as a means to an end,” Dr. Rustici said. “This would be another step in the wrong direction. Students’ voice about quality and competency should not be muzzled.”

Layton said he would work with Nelson, Gibbs and Faculty Senate Liaison Monet Ballard to determine a path forward. “It would be interesting to see a resolution on this expressing the student’s position to the provost’s office,” he said. “These are the kind of issues we brought up in the campaign that student government should be more active in.”

When I asked Dr. Renshaw about the prospect of students working together with faculty and administration in developing a new course evaluation process, he said, “I think that’s a great idea. That’s how it should be done.”

Agreed. Let’s make sure our voices are heard.