Tomorrow's Teaching and Learning
------------------------------------------------------------------------------------------------------------------
Answers to Faculty Concerns About Online Versus In-class
Administration of Student Ratings of Instruction (SRI)
Many faculty members express reservations about online
SRIs. To increase their motivation and cooperation, it is essential to
understand the underlying reasons for their resistance and to provide good
answers that counter their reservations and defuse their concerns. The
following are research-based answers to four major faculty concerns about
online SRIs.
Concern 1: The online method leads to a lower response
rate, which may have negative consequences for faculty.
Participation in online ratings is voluntary and requires
student motivation to invest time and effort in completing the forms. Faculty
are concerned that these conditions will produce a lower response rate, which
may reduce the reliability and validity of the ratings and may have negative
consequences for them.
The majority of studies on this issue found that online
ratings do indeed produce a lower response rate than in-class ratings (Avery,
Bryant, Mathios, Kang, & Bell, 2006; Benton, Webster, Gross, & Pallett,
2010; IDEA, 2011; Nulty, 2008). One explanation is that in-class surveys reach
a captive audience; moreover, students in class are encouraged to participate
by the instructor's mere presence, by the instructor's explicit requests to
respond, and by peer pressure. In contrast, students rating online face no such
compulsion, may lack the motivation to complete the forms, and may experience
inconvenience and technical problems (Sorenson & Johnson, 2003).
Concern 2: Dissatisfied/less successful students
participate in the online method at a higher rate than other students.
Faculty are concerned that students who are unsuccessful,
dissatisfied, or disengaged may be particularly motivated to participate in
online ratings in order to rate their teachers low, blaming them for their own
failure, disengagement, or dissatisfaction. If so, students with a low opinion
of the instructor would participate in online ratings at a substantially higher
rate than more satisfied students.
If this concern were correct, then the majority of
respondents in online surveys would rate the instructor and the course low,
and consequently the rating distribution would be skewed towards the lower end
of the rating scale. However, there is robust research evidence to the
contrary for both methods, paper and online: the distribution of student
ratings on the Overall Teaching item is strongly concentrated at the higher
end of the scale.
Online score distributions have the same shape as the paper
distributions: a long tail at the low end of the scale and a peak at the high
end. In other words, unhappy students do not appear to be more likely to
complete the online ratings than they were to complete paper ratings (Linse,
2012).
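To make the shape argument concrete, the following is a minimal Python sketch
using hypothetical counts for a 1-5 Overall Teaching item (the data are
invented for illustration, not taken from any of the cited studies). It
computes the mean and the Fisher-Pearson skewness coefficient; a negative
coefficient corresponds to the reported shape, a peak at the high end with a
long tail at the low end, which is the opposite of what the concern predicts.

```python
from collections import Counter

# Hypothetical response counts on a 1-5 Overall Teaching item.
ratings = [1] * 3 + [2] * 7 + [3] * 15 + [4] * 35 + [5] * 40  # n = 100

n = len(ratings)
mean = sum(ratings) / n
std = (sum((x - mean) ** 2 for x in ratings) / n) ** 0.5
# Fisher-Pearson skewness: negative when the mass sits at the
# high end of the scale with a tail toward the low end.
skew = sum((x - mean) ** 3 for x in ratings) / (n * std ** 3)

print(Counter(ratings))           # peak at 4-5, thin tail at 1-2
print(f"mean = {mean:.2f}")       # well above the scale midpoint of 3
print(f"skewness = {skew:.2f}")   # negative: high-end peak, low-end tail
```

If the concern were correct, the counts would instead pile up at 1-2, the mean
would fall below the midpoint, and the skewness coefficient would be positive.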
The strong evidence that the majority of instructors are
rated above the midpoint of the rating scale indicates that the majority of
participants in online ratings are the more satisfied students, refuting
faculty concerns about a negative response bias. Indeed, substantial research
evidence shows that the better students, those with a higher cumulative GPA or
higher SAT scores, are more likely to complete online SRI forms than less
successful students (Adams & Umbach, 2012; Avery et al., 2006; Layne,
DeCristoforo, & McGinty, 1999; Porter & Umbach, 2006; Sorenson &
Reiner, 2003).
The author examined this issue at her university for all
undergraduate courses in two large schools, Engineering and Humanities, with
110 and 230 participating courses, respectively (Hativa, Many, & Dayagi,
2010). At the beginning of the semester, all students in each school were
sorted into four GPA levels: the lowest 20% of GPAs in a school formed the
Poor group, the highest 20% formed the Excellent group, and the two
intermediate levels formed the Fair and Good groups, with 30% of the students
in each. The response rates by group were:

GPA group                  Humanities   Engineering
Poor (lowest 20%)             35%          48%
Fair (next 30%)               43%          60%
Good (next 30%)               43%          66%
Excellent (highest 20%)       50%          72%
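As an illustration of this kind of analysis, here is a minimal Python sketch
that reproduces the grouping logic under an assumed data layout (the student
records and field order are hypothetical, not the study's actual data):

```python
# Sort students into the study's four GPA groups (bottom 20% = Poor,
# next 30% = Fair, next 30% = Good, top 20% = Excellent) and compute
# the online response rate within each group.
def response_rates_by_gpa(students):
    """students: list of (gpa, responded) pairs; responded is a bool."""
    ranked = sorted(students, key=lambda s: s[0])
    n = len(ranked)
    # Group boundaries at the 20th, 50th, and 80th percentiles.
    cuts = [int(n * 0.2), int(n * 0.5), int(n * 0.8), n]
    names = ["Poor", "Fair", "Good", "Excellent"]
    rates, start = {}, 0
    for name, end in zip(names, cuts):
        group = ranked[start:end]
        rates[name] = round(100 * sum(r for _, r in group) / len(group), 1)
        start = end
    return rates

# Hypothetical example: ten students; higher-GPA students respond more.
students = [(2.1, False), (2.3, False), (2.6, True), (2.8, False),
            (3.0, True), (3.1, False), (3.3, True), (3.5, True),
            (3.7, True), (3.9, True)]
print(response_rates_by_gpa(students))
# {'Poor': 0.0, 'Fair': 66.7, 'Good': 66.7, 'Excellent': 100.0}
```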
In sum, this faculty concern is refuted and even
reversed: the higher the GPA, the higher the response rate in the online
method, so the least successful students appear to participate in online
ratings at a lower rate than better students.
Concern 3: The lower response rate (as in Concern 1) and
the higher participation rate of dissatisfied students in online administration
(as in Concern 2) will result in lower instructor ratings, as compared with
in-class administration.
Faculty members are concerned that if the response rate
is low (e.g., less than 40%, as happens frequently in online ratings), the
majority of respondents may be students with a low opinion of the course and
the teacher, lowering the "true" mean rating of the instructor.
Research findings on differences in average rating scores
between the two methods of survey delivery are inconsistent. Several studies
found no significant differences (Avery et al., 2006; Benton et al., 2010;
IDEA, 2011; Linse, 2010; Venette, Sellnow, & McIntyre, 2010). Other studies
found that ratings were consistently lower online than on paper, but that the
difference was either small and not statistically significant (Kulik, 2005)
or large and statistically significant (Chang, 2004).
The conflicting findings can be explained by differences
in the size of the population examined in these studies (from dozens to
several thousand courses), in the instruments used (some of which may be of
lower quality), and in the research methods. Nonetheless, the main source of
variance among findings is probably whether participation in SRI is mandatory
or selective. If not all courses participate in the rating procedure, but
rather only those selected by the department or self-selected by the
instructor, then the selected courses and their mean ratings may not be
representative of the full course population and should not be used as a
valid basis for comparison.
The author examined this issue in two studies that
compared mean instructor ratings in paper and online SRI administration,
based on data from her university, where course participation is mandatory.
The results of both studies, presented graphically, reveal a strong decrease
in annual mean and median ratings from paper to online administration. The
lower online ratings cannot be explained by a negative response bias (a
higher participation rate of dissatisfied students) because, as shown above,
many more good students than poor students participate in online ratings. A
reasonable explanation is that online ratings are more sincere and honest,
and freer of teacher influence and social desirability bias, than in-class
ratings.
The main implication is that comparisons of
course/teacher ratings can take place only within the same method of
measurement, either on paper or online. Ratings obtained by the two methods
should never be compared with each other. The best way to avoid improper
comparisons is to use a single method of rating throughout all courses in an
institution, or at least within a particular school or department.
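One simple way to operationalize this rule, sketched below with hypothetical
course data, is to standardize each course's rating against other courses
evaluated by the same method, so that a paper-rated and an online-rated course
are each judged relative to their own method's distribution. The function and
data here are illustrative, not an established procedure from the article.

```python
# Compare each course's rating only against courses evaluated with
# the same method, via within-method z-scores.
from statistics import mean, stdev

def within_method_z(courses):
    """courses: list of (course_id, method, rating) triples."""
    by_method = {}
    for _, method, rating in courses:
        by_method.setdefault(method, []).append(rating)
    stats = {m: (mean(r), stdev(r)) for m, r in by_method.items()}
    return {cid: round((rating - stats[m][0]) / stats[m][1], 2)
            for cid, m, rating in courses}

courses = [("C1", "paper", 4.4), ("C2", "paper", 4.0), ("C3", "paper", 3.6),
           ("C4", "online", 4.1), ("C5", "online", 3.7), ("C6", "online", 3.3)]
print(within_method_z(courses))
# C1 and C4 receive the same z-score (1.0) even though their raw
# means differ, because each is compared only within its own method.
```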
Concern 4: The lower response rate and the higher
participation rate of dissatisfied students in online administration will
result in fewer and mostly negative written comments.
Faculty members are concerned that because the majority
of expected respondents are dissatisfied students, the majority of written
comments will be negative (Sorenson & Reiner, 2003). An additional concern
is that the lower response rate in online surveys will significantly reduce
the total number of written comments compared with in-class ratings. The
fewer the comments students write, the lower the quality of the feedback
teachers receive as a resource for improvement.
There is a consensus among researchers that although mean
online response rates are lower than in paper administration, more respondents
write comments online than on paper. Johnson (2003) found that while 63% of
the online rating forms included written student comments, fewer than 10% of
in-class forms did. Altogether, the overall number of online comments appears
to be larger than in paper surveys.
In support:
On average, classes evaluated online had more than five
times as much written commentary as the classes evaluated on paper, despite the
slightly lower overall response rates for the classes evaluated online (Hardy,
2003, p. 35).
In addition, comments written online were found to be
longer, to present more information, and to include fewer socially desirable
responses than comments written on paper (Alhija & Fresko, 2009).
Altogether, the larger number of written comments and their increased length
and detail in the online method provide instructors with more beneficial
information, so the quality of online written responses is better than that
of in-class survey comments.
The following are four possible explanations for the
larger number and better quality of online comments:
- No time constraints: During an online response session, students are not
constrained by time and can write as many comments, at any length, as they
wish.
- Preference for typing over handwriting: Students seem to prefer typing
comments (online) to handwriting them.
- Increased confidentiality: Some students are concerned that the instructor
will identify their handwriting if comments are written on paper.
- Prevention of instructor influence: Students feel more secure and free to
write honest and candid responses online.
Regarding the favorability of the comments, students were
found to submit positive, negative, and mixed written comments in both methods
of rating delivery, with no predominance of negative comments in online
ratings (Hardy, 2003). Indeed, for low-rated teachers (those perceived by
students as poor teachers), written comments appear to be predominantly
negative. In contrast, high-rated teachers receive only a few negative
comments and predominantly positive ones.
In sum, faculty beliefs about written comments are
refuted: students write more comments online, of better quality, and these
comments are not mostly negative but rather reflect the overall quality of
the instructor as perceived by students.
References
Adams, M. J. D., & Umbach, P. D. (2012). Nonresponse
and online student evaluations of teaching: Understanding the influence of
salience, fatigue, and academic environments. Research in Higher Education,
53, 576-591.
Alhija, F. N. A., & Fresko, B. (2009). Student
evaluation of instruction: What can be learned from students' written
comments? Studies in Educational Evaluation, 35(1), 37-44.
Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., &
Bell, D. (2006). Electronic course evaluations: Does an online delivery system
influence student evaluations? The Journal of Economic Education, 37(1),
21-37.
Benton, S. L., Webster, R., Gross, A. B., & Pallett,
W. H. (2010). An analysis of IDEA student ratings of instruction using paper
versus online survey methods, 2002-2008 data (IDEA Technical Report No. 16).
The IDEA Center.
Chang, T. S. (2004). The results of student ratings:
Paper vs. online. Journal of Taiwan Normal University, 49(1), 171-186.
Hardy, N. (2003). Online ratings: Fact and fiction. In D.
L. Sorenson & T. D. Johnson (Eds.), Online student ratings of instruction.
New Directions for Teaching and Learning (Vol. 96, pp. 31-38). San Francisco:
Jossey-Bass.
Hativa, N., Many, A., & Dayagi, R. (2010). The whys
and wherefores of teacher evaluation by their students [in Hebrew]. Al
Hagova, 9, 30-37.
IDEA. (2011). Paper versus online survey delivery (IDEA
Research Notes No. 4). The IDEA Center.
Johnson, T. D. (2003). Online student ratings: Will
students respond? In D. L. Sorenson & T. D. Johnson (Eds.), Online student
ratings of instruction. New Directions for Teaching and Learning (Vol. 96, pp.
49-59). San Francisco: Jossey-Bass.
Kulik, J. A. (2005). Online collection of student
evaluations of teaching. Retrieved April 2012, from http://www.umich.edu/~eande/tq/OnLineTQExp.pdf
Layne, B. H., DeCristoforo, J. R., & McGinty, D.
(1999). Electronic versus traditional student ratings of instruction.
Research in Higher Education, 40(2), 221-232.
Linse, A. R. (2010, Feb. 22nd). [Building in-house online
course eval system]. Professional and Organizational Development (POD) Network
in Higher Education, Listserv commentary.
Linse, A. R. (2012, April 27th). [Early release of the
final course grade for students who have completed the SRI form for that
course]. Professional and Organizational Development (POD) Network in Higher
Education, Listserv commentary.
Nulty, D. D. (2008). The adequacy of response rates to
online and paper surveys: What can be done? Assessment and Evaluation in
Higher Education, 33, 301-314.
Porter, S. R., & Umbach, P. D. (2006). Student survey
response rates across institutions: Why do they vary? Research in Higher
Education, 47(2), 229-247.
Sorenson, D. L., & Johnson, T. D. (Eds.). (2003).
Online student ratings of instruction. New Directions for Teaching and Learning
(Vol. 96). San Francisco: Jossey-Bass.
Sorenson, D. L., & Reiner, C. (2003). Charting the
uncharted seas of online student ratings of instruction. In D. L. Sorenson
& T. D. Johnson (Eds.), Online student ratings of instruction. New
Directions for Teaching and Learning (Vol. 96, pp. 1-24). San Francisco:
Jossey-Bass.
Venette, S., Sellnow, D., & McIntyre, K. (2010).
Charting new territory: Assessing the online frontier of student ratings of
instruction. Assessment & Evaluation in Higher Education, 35(1), 97-111.
CONTACT
------------------------------------------------------------------------------------------------------------------
Nira Hativa, Ph.D.
Professor Emeritus of Teaching in Higher Education
Former chair of the Department for Curriculum and Instruction, School of Education
Former director of the Center for the Advancement of Teaching
Former director of the online system for student ratings of instruction
Tel Aviv University