Faculty Affairs and the Standing Committee for Assessment of Teaching Effectiveness have developed the following resources to guide Student Feedback Reviewers.
This webpage describes the new process by which two Student Feedback Reviewers evaluate student feedback for promotion.
This webpage describes what Student Feedback Reviewers are expected to do to evaluate a candidate's student feedback. It also clarifies that the report is not a summary of student feedback.
This webpage provides guidance for optionally using AI to summarize qualitative student feedback. AI must not be used to generate the final, 750-word evaluative statement about the candidate’s teaching effectiveness.
The guidance below depends on having sufficient student responses (a high enough response rate) to be confident that the feedback is representative of the students enrolled in the course. When response rates are low, respondents' views are less likely to represent those of other students. Most instructors see much lower response rates with the SEEQ, an expected outcome for any new feedback instrument.
If you have questions about any of the information below, please contact our faculty consultants or email site@psu.edu.
Members of promotion committees will want to compare student feedback with other sources of teaching evidence such as those below. This is particularly important when response rates are low. When only a few students respond, we cannot have confidence that the responses are representative of students in the course.
The information above draws on a Penn State University Faculty Senate Report (Appendix R, March 14, 2017), which was adapted from A. Linse, "Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees," Studies in Educational Evaluation, 54 (2017), pp. 94–106.