Course Evaluation Power Grab

Photo courtesy of Staff Senate

BY DOMINIC PINO, STAFF WRITER

At the end of the semester, professors give students final course grades, and students give professors final course evaluations. Some professors want to change that.

On Wednesday, April 3, the George Mason University Faculty Senate will vote on proposals that weaken students’ voices in course evaluation. The recommendations from the university’s Effective Teaching Committee include:

  1. Removing the overall rating questions from the current Course Evaluation Form that students use to rate courses at the end of a semester.
  2. Switching from paper to online forms, which the recommendation acknowledges will reduce the response rate.
  3. Increasing reliance on professor-based forms of evaluation like peer review and self-assessment.  According to the recommendations, “Some suggested weights might include 50 percent for peer review, 30 percent for self-assessment and 20 percent of the total evaluation score based on the results of the Course Evaluation form.”
  4. Adjusting course ratings for course load and type.

Let’s go through these power grabs one by one.

1) Removing the overall rating questions. The rationale for removing items 15 and 16, “My overall rating of the teaching” and “My overall rating of this course,” from the Course Evaluation Form is that it would support Mason Strategic Goal 8: Diverse Academic Community.  

The recommendations say: “Research has shown that using an overall measure of student satisfaction (such as Items 15 & 16 on the current Course Evaluation Form) results in negative bias toward minorities and females. By removing the bias inherent in using a single number for high-stakes evaluation, Mason can improve the accuracy and fairness of faculty evaluations and improve retention of minority and female faculty.”

The recommendations do not cite any sources (imagine if a student did that!), so I looked for research on course evaluation bias. I could not find any evidence of overall course ratings being biased against women. There is one 2007 study of 190 professors in the College Student Journal that found, “On the two global items (overall value of course and overall teaching ability), student ratings were very good for White faculty and faculty who were identified as ‘Other,’ and good for Black faculty,” which indicates slight bias against Black professors.

However, that is far from dispositive. A 2010 article in the Journal of Diversity in Higher Education includes a very thorough literature review of over 30 studies and finds evidence on both sides. The article says, “Although some work found that women receive less favorable SETs [student evaluations of teaching] than their male colleagues … other studies found no effect of faculty gender,” and, “The interactive effects of race and gender on SETs are unclear,” and, “The case of racial minority female faculty is also ambiguous.” The article seems to provide more evidence for racial bias, but it also says, “Two studies … found no evidence of racial bias in the overall evaluation of Black versus White faculty.”

Research aside, it is quite demeaning to make course evaluations a race/gender issue. It is demeaning to minority and female professors to imply they can’t hold their own in their profession, and it is demeaning to students to implicitly accuse them of racism and sexism in their course evaluations.

Additionally, it takes some nerve for the recommendations to bemoan “the bias inherent in using a single number for high-stakes evaluation,” considering students face a single letter on their transcript at the end of a semester.

That complaint is even more bizarre when you consider the actual average course rating. Course ratings are on a 1-to-5 scale, so one would expect the mean to be around 3. Using the average course rating and average teaching rating for 11,884 courses from the last three semesters (Fall 2017, Spring 2018, and Fall 2018), available on the GMU course rating website, the mean teaching rating was 4.49 and the mean course rating was 4.30. The ratings are skewed so high that any teaching rating below 3.40 and any course rating below 2.97 is a statistical outlier.
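(For reference, a common convention for flagging statistical outliers is the 1.5 × IQR rule: a rating x counts as an outlier when

x < Q1 - 1.5 × (Q3 - Q1),

where Q1 and Q3 are the first and third quartiles of the ratings. Under a rule like this, the cutoff only ends up that high on a 1-to-5 scale when the bulk of the ratings are bunched near the top.)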

It seems like the single number is working out great for your average professor, so who will benefit from this change? Bad professors. Without a single overall rating to point to, it will be much harder to determine how well a professor is actually doing. This is not a race or gender issue. This is a flagrant attempt to suppress students’ voices in professor evaluation by denying them the ability to rate both courses and teaching overall.

2) Switching from paper to online forms. The only reason given for the change is that the Office of Institutional Research and Effectiveness (OIRE) recommended it “due to its cost-effectiveness.” No evidence is given in the recommendations to support that claim.

I spoke with Dr. Gesele Durham, the associate provost for the OIRE, and inquired about the costs of administering the Course Evaluation Form. Based on the costs of envelopes, labels, scantron forms, and maintenance for the scanner machines, she provided a conservative estimate of $19,000. That does not include a full-time staff member who oversees the Course Evaluation Form process year-round and additional staff during peak times, she said.

That full-time staff member is Course Rating Specialist Robert McDonnell. He is the guy for course evaluations—according to the OIRE website, he “distributes, collects and scans the paper version of Mason’s course ratings survey. He also runs the programs that analyzes the data that produces the final ratings reports.” He has been with the OIRE since 2000 and certainly knows more about this topic than anyone else.

Despite that specialized expertise, Mr. McDonnell told me he didn’t know a lot about the specific proposals the Faculty Senate will vote on because he was never consulted when the recommendations were being made. He immediately expressed concern about the lower response rates that online forms would cause. When I asked him what changing to online forms would mean for him, he smiled and said, “It would certainly be better for my back,” but was personally concerned about changes to his job and the transitional costs of learning a new system.

When I asked Dr. Durham about the comparative costs between paper and online, she said, “Either way, it’s an investment.” She was unaware of any cost-effectiveness study being done by the OIRE (although she just started at Mason a few weeks ago). Her concerns were more focused on the age of the scanner machines: “I don’t even know if they make them anymore,” she said with a laugh.

The recommendations do include ideas for incentives to increase the response rates with online forms, but why bother with the change if no one knows how cost-effective it actually is? And even if the Effective Teaching Committee’s claim about cost-effectiveness is true, it is not a great reason to weaken students’ voices considering the university’s budget is $1.06 billion for fiscal year 2019, and the amount in question is in the thousands. Updating the technology seems reasonable, and the university should invest in the best way to get student feedback. Since online forms would have lower response rates, they are clearly not the best way.

3) Increasing reliance on professor-based forms of evaluation like peer review and self-assessment. The suggested weights of 50 percent for peer review, 30 percent for self-assessment and a measly 20 percent for student course evaluations are about as blatant a power grab as you will find. Professors get 100 percent of the say in students’ course grades, but students only get 20 percent of the say in professors’ course evaluations? These are only suggestions and are not binding on departments, but why even suggest such numbers unless they are intended to be used?
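To see what those weights mean in practice, take a hypothetical professor (the recommendations do not include a worked example) whose peer review and self-assessment both come in at 5 out of 5 while students rate the course 2 out of 5. Under the suggested weighting, the total evaluation score would be

0.5 × 5 + 0.3 × 5 + 0.2 × 2 = 2.5 + 1.5 + 0.4 = 4.4

A 4.4 out of 5, despite a 2 from the students.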

Students are the ones in the classroom all semester. Students know how the class went, whether the teaching was good, whether the readings were useful, whether the professor was respectful, etc. Professors can’t possibly rate other professors fairly because they aren’t in the class.  

As for self-assessment, come on! After a whole semester of assignments, the Course Evaluation Form is when the students finally get a chance to give their professors a grade. I think the professors can handle it.

But apparently they can’t. The recommendations state, “The Committee’s research regarding faculty satisfaction with the Course Evaluation Form indicated overwhelming dissatisfaction with the form itself, as well as how the form is used in renewal, promotion, tenure, and salary decisions.” I’m willing to bet there are plenty of students who have indicated overwhelming dissatisfaction with the final grades on their transcripts, but they don’t get to institutionalize their complaints and fundamentally change the grading system.

4) Adjusting course ratings for course load and type. The recommendations say, “Because in any given year some faculty teach more courses than others or teach required courses with large enrollments, or online courses, evaluations of teaching should be proportional to each instructor’s teaching load and course characteristics.”

Students don’t get a break on their grades because they are taking more credits than other students or taking harder classes than other students. Why should professors get a break on their course evaluations because they teach more classes than other professors or teach harder classes than other professors?

All this overlooks the monetary factor. We, the students, are the customers. We pay the university and forgo the opportunity to make money from working in order to get an education and gain credentials. They, the professors, are the producers. They get paid by the university to teach students and do research. The recommendations express concern over “the high-stakes nature of faculty evaluation for career decision-making,” while giving no thought to the high-stakes nature of final grades for their customers: getting into graduate school, getting into law school, graduating on time, meeting prerequisites, and so on. Our final grades are a big part of determining whether our four-year investment of thousands of dollars was worthwhile.

Professors: you grade us, so it’s only fair that we be allowed to grade you.

Prof. Lorraine Valdez Pierce, the chair of the Effective Teaching Committee, declined to comment until after the vote.