Why should universities collect formative data at the course level if they aim to make evidence-based improvements in learning and teaching? Murat Sözer and Zuhal Zeybekoğlu from the Koç University Office of Learning and Teaching in Turkey explain how they use mid-semester course evaluation as a formative feedback tool to improve learning and teaching both at the course and institutional level.
Universities collect a wide range of data to improve the quality of their learning and teaching. Large-scale Likert-type surveys are a common method of data collection to inform changes at the institutional, program and course levels. Such evaluation practices usually feed cyclic quality improvement processes that span a year or more. At the course level, this means the process has no immediate value for the students who provide the feedback, unless they retake those courses in the future. Thus, course-level data collection should be formative rather than summative, so that instructional improvements benefit the students taking the course at the time of the evaluation.
In 2010, Koç University (KU) in Turkey started to use mid-semester course evaluations, originally known as “Small Group Instructional Diagnosis”: a formative feedback tool that helps instructors make quick, small, evidence-based improvements at the course level for the benefit of students currently taking the course. In the process, which is administered by the KU Office of Learning and Teaching, students are asked two broad open-ended questions: (1) What is going well in this course? (2) What needs to be improved?
The open-ended nature of the process facilitates feedback on different aspects of a course, including course planning, materials, teaching methods, teaching behaviours, assignments, and assessment and grading. Students can elaborate on how each of these aspects helps or hinders their learning in a particular course. Having a voice in their learning experience, students become active, reflective, collaborative and consulted learners who think critically about their learning process and about the learning and teaching environment in general. Because the process is anonymous and confidential, students tend to provide candid feedback about the course.
In general, students provide rich data about the quality of a course and of the faculty member’s teaching.
Our experience has shown that the process is extremely valuable for faculty members as well. Thinking and reflecting on their courses and their competencies as teachers, faculty members find themselves naturally engaged with the professional development opportunities offered by the Office of Learning and Teaching. New and junior faculty members in particular, who need more immediate feedback to adapt to their profession or to a new working environment at KU, use mid-semester course evaluation to facilitate and shorten their adaptation. Sometimes they request a consultation session from the Office of Learning and Teaching after the evaluation to determine how to follow up with students and what specific actions to take based on their feedback.
Mid-semester evaluation is easy to implement and report upon with the help of online tools and software. Over the years, the KU Office of Learning and Teaching has used three different formats to perform the evaluation: paper-based, online and mobile. The mobile format was launched last year to give students a more user-friendly and convenient experience. It is now a fully fledged system that allows more than six hundred courses to be evaluated at the same time.
The evolution of the process has lessened the reliance on human resources and increased the reliance on technology. The traditional paper-based format requires experienced Office of Learning and Teaching staff to visit the classroom, to collect written and oral data with the help of pre-prepared evaluation forms and in-class whole-group discussions, and to analyse the data qualitatively. Technology reduces this reliance on staff to some extent, as data collection and analysis are largely automated with the help of artificial intelligence.
The online formats remain open for ten days, so the participation rate can be high, whereas only the students present in the classroom at the time of a paper-based evaluation can participate. From the university administration’s point of view, extended and structured analysis is readily available in the automatically prepared reports, whereas paper-based evaluation requires extensive content analysis and takes considerably longer. Each format has pros and cons; therefore, institutional requirements, existing infrastructure and other situational factors need to be taken into account before implementing the process in any institution.
When the KU Office of Learning and Teaching took the initiative and conducted a meta-analysis of the data gathered via mid-semester course evaluation from different courses across academic programs, it became apparent that not only faculty members and students but also the institution as a whole can reap benefits from this process. The comprehensive analysis of more than 340 mid-semester course evaluation reports resulted in a list of teaching behaviours, personal attributes, instructional strategies, and assessment and evaluation methods that work well in our context. The longitudinal analysis also revealed some deeply rooted issues and concerns that necessitated immediate improvement and action by the Office of Learning and Teaching, which is affiliated with the Vice President of Academic Affairs, who is responsible for the advancement of learning and teaching at the institutional level.
As a learning and teaching office, we began by prioritising the areas to work on over a five-year period. Firstly, we used the positive results to disseminate best practices at KU: we brought context-specific positive indicators together as authentic workshop material and shared them with faculty members as evidence-based best practices in learning and teaching at KU. The list also served as a guide when we reviewed the faculty teaching handbook this year.
Secondly, over those five years we have undertaken specific improvements to address two big recurring issues affecting many courses: a heavy emphasis on theory with little application, and insufficient digital support. To deal with the former, the breadth and depth of teaching workshops for new and existing faculty members were revised with a focus on the theory and practice of learning-centeredness. Course design and delivery workshops focusing on application-based active learning strategies and techniques were developed to improve teachers’ ability to combine theory with application in their courses.
Significant improvements have been made in educational technologies to address the issue of digitalisation. Regular workshops and consultation sessions were offered to all faculty members to disseminate the use of the existing learning management system (LMS) and to increase the use of online immediate response systems, which give students more immediate feedback about their learning in class. The number of faculty members using the LMS to create and share digital resources and to manage online discussions and other interactive tools has been rising. We are now concentrating our efforts on increasing the use of immediate response systems and other digital tools to sustain the improvement process that accelerated after our scholarly analysis of the mid-semester course evaluation reports.
Mid-semester evaluation as a way of collecting feedback from students will always be on our agenda. Whether conducted via traditional pen and paper, online or mobile technologies, mid-semester course evaluation can be a valuable tool for universities aiming to achieve continuous evidence-based improvement in learning and teaching.
“Expert Voices” is an online platform featuring original commentary and analysis on the higher education and research sector in Europe. It offers EUA experts, members and partners the opportunity to share their expertise and perspectives in an interactive and flexible exchange on key topics in the field.
All views expressed in these articles are those of the authors and do not necessarily reflect those of EUA.