Response Rates and Accuracy of Online Course Evaluations

Raising response rates

Faculty and their attitude towards evaluation in the classroom make a huge difference in response rates. Would you like higher response rates? Better feedback? More substantive comments? You can make that happen. Here's how.

Worried about the accuracy or validity of the course evaluations system?

Online evaluations save money, lower staff workload, decrease the margin for error, preserve class time that would otherwise be spent on in-class evaluations, and allow quick data turnaround. Still, going online is a big change from paper evaluations. As a faculty member, you naturally care about the feedback your students have to offer, and want to keep the accuracy, volume, and quality of that feedback as high as possible. Many faculty have questions about online evaluations, and wonder whether they can retain the strengths they've come to depend on with paper evaluations. Below, some common faculty concerns are detailed, along with some of the literature that speaks to those concerns.

Are online evaluations as accurate as paper evaluations?

Yes. There is a prevalent belief that paper evaluations come closer than online evaluations to a rating that accurately reflects the quality of a faculty member's teaching. If this were true, scores between paper and online evaluations would differ substantially. Burton et al. (2012) reviewed the evaluation literature and found that, of the 18 studies they identified as measuring differences between quantitative feedback on paper vs. online evaluations, 14 reported no difference between the delivery methods and 2 reported slightly higher ratings online. In their own experiment, Burton et al. (2012) found that online ratings were significantly higher than those collected on paper evaluations. The weight of the research does not support the idea that moving to online evaluations will either skew ratings or cause them to become more negative. Even in terms of qualitative feedback, studies tend to find that online formats garner more positive, and more useful, comments than paper (Burton et al. 2012; Heath et al. 2007).

Won't allowing "absentee" students to participate lower my evaluation scores?

No. It's true that when evaluations are given in class, students who attend less often are more likely to be excluded. This is often seen as a positive, since these students are assumed to have less of a basis for evaluating a class, and are also assumed to be the students who would evaluate a faculty member more negatively. The concern is that by giving this population of students a greater opportunity to evaluate, more negative feedback will be collected, lowering a faculty member's overall rating.

First, research does not support the idea that class and teacher ratings are related to student attendance (Perrett 2013); lower class attendance does not necessarily mean that a student will score an instructor more harshly. Also, students with a higher GPA (presumably the "better" students) complete online evaluations at over twice the rate of students with a poor GPA (Thorpe 2002). Students expecting higher grades also evaluate at a higher rate (Adams and Umbach 2012). In the same study, mean SAT scores were higher for students responding to the survey; as SAT score decreased, so did the likelihood that a student would participate. Even if this were not the case, students expecting poor grades in a class are no more likely to score an instructor below the class mean than students expecting good grades (Avery et al. 2006; Thorpe 2002).

While this may allay concerns, there is another reason for capturing feedback from the students who attend less frequently, or whose attendance becomes spottier at the end of the term: they may be in a unique position to point out things about a course that a faculty member would definitely want to know. While positive reinforcement is always welcome, knowing why a student failed to engage with, or became disengaged from, a course can show where to make tweaks and improvements that will benefit all students.

Will students give as much, and as high-quality, qualitative feedback online?

Yes. When a faculty member gets their evaluations, they generally hope for comments, substantive feedback, and detail that will give them enough information to know whether a change in their teaching or course content is warranted, and if so, exactly what that change should be.

Contrary to popular expectation, however, paper evaluations do not offer a greater benefit than online in this area. In fact, the majority of studies show that a higher percentage of students who respond to online evaluations include qualitative feedback (Donovan et al. 2006; Heath et al. 2007; Kasiar et al. 2002; Laubsch 2006). The amount of online qualitative feedback is also greater than that in paper evaluations: in research analyzing word count, studies find that qualitative feedback from online evaluations exceeds that of paper evaluations, often by a wide margin (Burton et al. 2012; Heath et al. 2007; Kasiar et al. 2002; Hardy 2003; Hmieleski and Champagne 2000). Perhaps most importantly, several studies have examined the quality of the comments submitted through both formats (paper vs. online) and found that online comments were more substantive and informative, as defined by more words per comment, more descriptive text, and more detailed feedback (Ballantyne 2003; Burton et al. 2012; Collings and Ballantyne 2004; Donovan et al. 2006; Johnson 2002).

Don’t online evaluations have lower return rates than paper?

That is completely up to you. Studies comparing online vs. paper evaluations find that online evaluations generally have lower response rates than paper evaluations, barring incentives and interventions (e.g. reminder messages, rewards). How large that difference is remains a matter of debate. In refereed academic papers published since 2005 that listed no incentives or interventions and included comparable paper rates, paper response rates averaged 9% higher than online rates. Studies reported in refereed journals, however, are not the same as real-life examples from universities using online systems across their campuses. If data from universities using online course evaluations campus-wide are compared, the difference between paper and online response increases slightly, to 10.8%, again with no incentives. Adding incentives can boost response rates by 7-25%, depending on which incentives or interventions are used (Ravenscroft & Enyeart, 2009; Norris & Conn, 2005; Johnson, 2002).

The University of Oregon uses a grade hold incentive and reminder notices. Our average online response rate is currently 78-79% (65% not including declines). It's higher in courses where faculty make it a point to let their students know how to find the evaluations, that their comments are valued, and how the data is used overall. (Read more about how to raise YOUR response rates.)

Is that better than our paper response rates? While response rates were not collected when we used paper evaluations, the sheer volume of evaluations collected has skyrocketed since going online. In Winter of 2007, only 32,000 Scantron forms were printed for that term’s evaluations. Last Winter, 84,728 evaluations were completed online.

Still, there are always people who insist that, prior to going online, they had a better response rate. But were all of those evaluations legitimate? In detailing its own move to online evaluations, the University of British Columbia made an interesting discovery that most institutions never consider. In an online system, only students who are validly enrolled in a course can evaluate that course, and they can evaluate it only once. Several of the courses in UBC's first few "test" terms had paper response rates higher than 100%. For comparison purposes in their paper, UBC simply reduced those figures to 100%, but it's a useful reminder that response rates may be artificially high for paper evaluations (University of British Columbia, 2010).

Will online response rates be high enough to have statistical validity?

Yes. What response rates are necessary to achieve statistical validity? Nulty (2008) looked at exactly this issue. He used and justified an 80% confidence level for his calculations and, through a number of assumptions and corrections for bias, concluded that classes under 20 students need a minimum 58% response rate to be considered valid. Courses with more than 50 enrollees can use 35% as their bar, and larger classes have even smaller acceptable rates. While Nulty outlines some cautions and has some confounding variables in his data, his overall conclusion supports typical online evaluation response rates as statistically acceptable. In Fall of 2012, our average response rate, not including declines, was 65%. And that can go higher. What needs to happen for our response rates to climb? You make the difference.
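For readers curious about the arithmetic behind thresholds like these, the sketch below shows the standard finite-population sample-size calculation that analyses of this kind start from. It is an illustration only, not Nulty's exact method: the function names are ours, the 80% confidence level and 10% sampling error are assumed inputs, and Nulty applies additional assumptions and corrections, so his published thresholds (58%, 35%) will not be reproduced exactly. The pattern, however, is the same: as class size grows, the response rate needed for a statistically defensible result drops quickly.

```python
import math

def required_respondents(class_size, z=1.28, sampling_error=0.10, p=0.5):
    """Finite-population sample-size estimate (illustrative sketch only).

    class_size     -- number of enrolled students (the population)
    z              -- z-score for the desired confidence level (~1.28 for 80%)
    sampling_error -- acceptable margin of error (0.10 = plus/minus 10%)
    p              -- assumed proportion; 0.5 is the most conservative choice
    """
    n = (class_size * p * (1 - p)) / (
        (class_size - 1) * (sampling_error / z) ** 2 + p * (1 - p)
    )
    return math.ceil(n)

def required_response_rate(class_size, **kwargs):
    """Minimum response rate (fraction of the class) implied by the estimate."""
    return required_respondents(class_size, **kwargs) / class_size

if __name__ == "__main__":
    # Smaller classes need a larger share of students to respond.
    for size in (10, 20, 50, 100, 300):
        print(f"class of {size:>3}: ~{required_response_rate(size):.0%} needed")
```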

I make a difference in response rates?

Yes. At institutions where evaluation is taken seriously by the administration and the faculty, students can feel that their feedback matters, and respond accordingly. The relationship between student and faculty member is highly personal and individual, and plays the biggest role in a student deciding whether to evaluate.

Many students surveyed believe that faculty do not take evaluations seriously and do not make changes as a result of the students' reviews (Marlin, 1987; Nasser & Fresco, 2002; Spencer & Schmelkin, 2002). Indeed, when asked, very few instructors report having made changes in direct response to student evaluation input (Beran & Rokosh, 2009). When faculty value course evaluations, educate students on how they are used, and emphasize that their input will be taken seriously, however, there is a positive effect on response rates (Gaillard et al., 2006). Constructive, informative, and encouraging instructor-student engagement around the course evaluation process is very important in maintaining or improving response rates (Norris & Conn, 2005; Johnson, 2002; Anderson et al., 2006; Ballantyne, 2003).

A Brigham Young University study found that improved instructor and student engagement helped response rates rise from 40% to 62% over three pilot projects (Johnson, 2002). The same study also showed a strong correlation between the level of communication and response rate.

There tends to be an overall assumption that response rates simply "are what they are", but nothing could be further from the truth. In fact, faculty and their attitude towards evaluation in their classroom make a huge difference in response rates. Would you like higher response rates? Better feedback? More substantive comments? The power to make that happen is in your hands; no one can make more of a difference in this area than you.

Here are some ideas to help you encourage your students to evaluate. Some may work better for you than others, or fit better with your personal style. Try them, and see which work best for you.

  • Early reminder – 2 to 3 weeks prior: While we already automatically send reminder messages to students during the evaluation period, one study (Norris & Conn, 2005) noted a sizable increase in student response rates when students were given an early notification that evaluations were approaching. A reminder around 2 to 3 weeks before the end of the term was found to be ideal, raising response rates an average of 17%.
  • Reminders into term – check how students are doing: When you log into DuckWeb and click on "Course Evaluations" from the Main DuckWeb Menu, you are taken to a landing page that shows your evaluations in progress. If your classes aren't submitting evaluations at the rates you'd like to see, mention the evaluations in class and let students know how important their feedback is to you. In Johnson's 2002 study, which followed up with non-responding students, 50% of non-responders reported having no idea that the survey was available, and another 16% had forgotten.
  • Make it an assignment: Many faculty are against offering credit for completing evaluations. The good news is, you don't have to! Making the evaluation an assignment, even with no point value attached, raised response rates 7% in one study (Johnson, 2002).
  • Give instructions: What's the most common question the Registrar's Office gets about course evaluations? How to find them! While the emails we send to students include instructions for finding the course evaluations, many students are simply conditioned to click "Student Menu" automatically when they log into DuckWeb, and are then frustrated because they can't find the evaluations menu. Mention that course evaluations are in the Main Menu, not the Student Menu. If they can't find the link, they can't evaluate. One study found that courses in which instructors demonstrated how to find and use the evaluations system had a 24% higher response rate than courses with no demonstration (Dommeyer et al., 2004).
  • Stress the importance of evaluation: Students are more likely to complete course evaluations if they understand how the evaluations are used and believe their opinions matter (Gaillard et al., 2006).
  • Detail how the University uses evaluation feedback: Many students don’t realize that their evaluations are looked at by all department chairs, and by promotion and tenure committees campus-wide. Let them know that this data is valued, and used, by University administrators.
  • Detail how YOU use evaluation feedback: One of the best ways to let students know that their opinion matters, and that you use it to improve your teaching, is to give them an example of how you've done so in the past. Share some feedback you've received, and let them know the changes you made as a result. While it is valuable to let students know how the University uses their feedback, that is not their biggest concern. Chen and Hoshower (2003) found that students consider an improvement in teaching to be the most important outcome of an evaluation system, followed closely by an improvement in course content and format. If the University listens, great! But what students really want is to know that you listen.

That’s great, but what I really want is more detailed written feedback. How do I get that?

Simple. Ask for it! Remember that a higher percentage of students who respond to online evaluations include qualitative feedback (Donovan et al. 2006; Johnson, 2002; Kasiar et al. 2002; Laubsch, 2006; Layne et al. 1999), that the amount of online qualitative feedback is greater than on paper evaluations (Kasiar et al. 2002; Hardy, 2003; Hmieleski & Champagne, 2000), and that online comments are more substantive and detailed than feedback on paper evaluations (Donovan et al. 2006; Johnson, 2002; Collings & Ballantyne, 2004; Ballantyne, 2003). All that's left is making sure you get the feedback you're most interested in hearing. Are you trying out a new textbook this term, or did you add a new subject area to your lectures? Mention it in class when you're talking about the evaluations, and let students know that you'd really like to hear how the material worked for them. Invite them to give you feedback on exactly what you most want to know about, and then demonstrate what type of feedback is most helpful to you. Let them know that "Great professor" is very nice, but not very helpful. Read them some examples of feedback that IS helpful, and show them what about that feedback was useful to you and how it helped you know what changes to make.

Questions?

Still have questions? Contact the Registrar's Office at (541) 346-2935 or registrar [at] uoregon [dot] edu, and thank you for making our online course evaluations system a success!

References

Adams, M. and Umbach, P. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53: 576-591.

Anderson, J., Brown, G. & Spaeth, S. (2006). Online student evaluations and response rates reconsidered. Innovate, 2(6). Retrieved from http://www.innovateonline.info/index.php?view=article&id=301

Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., & Bell, D. (2006). Electronic Course Evaluations: Does an Online Delivery System Influence Student Evaluations? Journal of Economic Education, 37(1): 21-37.

Ballantyne, C.S. (2003). Online evaluations of teaching: An examination of current practice and considerations for the future. In D. L. Sorenson & T. D. Johnson (Eds.), New Directions for Teaching and Learning #96: Online students ratings of instruction (pp. 103-112). San Francisco, CA: Jossey-Bass.

Beran, T., & Rokosh, J. (2009). Instructors' perspectives on the utility of student ratings of instruction. Instructional Science, 37(2): 171-184.

Burton, W., Civitano, A., & Steiner-Grossman, P. (2012). Online versus paper evaluations: Differences in both quantitative and qualitative data. Journal of Computing in Higher Education, 24(1): 58-69.

Chen, Y. & Hoshower, L. B. (2003). Student evaluation of teaching effectiveness: An assessment of student perception and motivation. Assessment and Evaluation in Higher Education, 28(1): 71-88.

Collings, D., & Ballantyne, C. (2004). Online student survey comments: A qualitative improvement? Paper presented at the 2004 Evaluation forum, Melbourne, Australia. Retrieved from http://our.murdoch.edu.au/Educational-Development/_document/Publications...

Dommeyer, C. J., Baum, P., Hanna, R. W., and Chapman, K. (2004). Gathering faculty teaching evaluations by in-class and online surveys: their effects on response rates and evaluations. Assessment and Evaluation in Higher Education, 29(5): 611-623.

Donovan, J., Mader, C. E., & Shinsky, J. (2006). Constructive student feedback: Online vs. Traditional course evaluations. Journal of Interactive Online Learning. 5(3), 283-295.

Gaillard, F., Mitchell, S., & Kavota, V. (2006). Students, Faculty, and Administrators' Perception of Students' Evaluations of Faculty in Higher Education Business Schools. Journal of College Teaching & Learning, 3(8): 77-90.

Hardy, N. (2003). Online ratings: fact and fiction. New Directions for Teaching and Learning, 96, 31-41. Retrieved from http://www.google.com/url?sa=t&rct=j&q=northwestern%20course%20evaluatio...

Heath, N. M., Lawyer, S. R., & Rasmussen, E. B. (2007). A comparison of web-based versus pencil-and-paper course evaluations. Teaching of Psychology, 34: 259-261. Retrieved from http://www.isu.edu/psych/fac_rasmussen.shtml

Hmieleski, K. & Champagne, M. V. (2000). Plugging in to course evaluation. The Technology Source Archives, Sept./Oct. Retrieved from http://technologysource.org/article/plugging_in_to_course_evaluation/.

Johnson, T. (2002). Online student ratings: Will students respond? Paper presented at the annual meeting of the American Educational Research Association, New Orleans, 2002. Retrieved from http://www.armstrong.edu/images/institutional_research/onlinesurvey_will...

Kasiar, J. B., Schroeder, S. L. , & Holstad, S. G. (2002). Comparison of Traditional and Web-Based Course Evaluation Processes in a Required, Team-Taught Pharmacotherapy Course. American Journal of Pharmaceutical Education, 66: 268-270.

Laubsch, P. (2006). Online and in‐person evaluations: A literature review and exploratory comparison. Journal of Online Learning and Teaching, 2(2). Retrieved from http://jolt.merlot.org/Vol2_No2_Laubsch.htm

Layne, B. H., DeCristoforo, J. R., & McGinty, D. (1999). Electronic versus traditional student ratings of instruction. Research in Higher Education, 40: 221-232.

Liegle, J. O., & McDonald, D. S. (2004, November 5). Lessons Learned From Online vs. Paper‐based Computer Information Students' Evaluation System. Information Systems Education Journal, 3(37). Retrieved from http://isedj.org/3/37/ISEDJ.3%2837%29.Liegle.pdf

Marlin, J. (1987). Student Perceptions of End-of-Course Evaluations. The Journal of Higher Education, 58(6): 704-716.

Nasser, F., & Fresko, B. (2002). Faculty Views of Student Evaluation of College Teaching. Assessment & Evaluation in Higher Education, 27(2): 187-198.

Norris, J., & Conn, C. (2005). Investigating Strategies for Increasing Student Response Rates to Online-Delivered Course Evaluations. Quarterly Review of Distance Education, 6: 13-29.

Nulty, D. (2008, June). The adequacy of response rates to online and paper surveys: what can be done? Assessment & Evaluation in Higher Education, 33(3), 301-314. Retrieved from http://public.callutheran.edu/~mondsche/misc/Nulty.pdf.

Perrett, J. (2013). Exploring graduate and undergraduate course evaluations administered on paper and online: A case study. Assessment & Evaluation in Higher Education, 38(1): 85-93.

Ravenscroft, M. & Enyeart, C. (2009). Online Student Course Evaluations: Strategies for Increasing Student Participation Rates: Custom Research Brief. Education Advisory Board, Washington D.C. Retrieved from: http://tcuespot.wikispaces.com/file/view/Online+Student+Course+Evaluatio...

Spencer, K. & Pedhazur Schmelkin, L. (2002). Student Perspectives on Teaching and its Evaluation. Assessment & Evaluation in Higher Education, 27(5): 397-409.

Thorpe, S. W. (2002). Online student evaluation of instruction: An investigation of non-response bias. Paper presented at the 42nd annual Forum for the Association for Institutional Research, Toronto, Ontario, Canada.

University of British Columbia, Vancouver. (2010, April 15). Student Evaluations of Teaching: Response Rates. Retrieved from http://teacheval.ubc.ca/files/2010/05/Student-Evaluations-of-Teaching-Re...