"To have great poets, there must be great audiences, too."
Walt Whitman
That often-quoted bit of wisdom seems a good guide to current conversations about evaluating teaching in higher education. Extrapolated beyond poetry, it asserts that excellence in any domain requires feedback from observers who can discern quality and communicate their views effectively. In the absence of others who appreciate the finer points of a body of work, performance improves only slowly toward high quality, or perhaps never reaches a masterful level at all.
In higher education, feedback on instructional design and delivery has not yet reached a level of sophistication sufficient to drive and sustain widespread excellence in teaching and learning. The empirical consensus is that “evidence-based” instruction involves accountable out-of-class preparation by students combined with a mix of active learning and brief didactic synthesis during face-to-face time. This approach can yield understanding that is both durable and generalizable in ways not typically seen with traditional presentation of content. The method is especially valuable when instructors pay close attention to students’ understanding and refine instructional materials and activities to generate improvement in student performance over successive offerings of a course.
The feedback that instructors receive, however, rarely engages that level of intentionality in design. Peers typically visit classes, but most of their feedback concerns the performance of the instructor during class time. Sometimes the observing peer simply evaluates against criteria for good presentation, though more observers are now asked to comment on active learning. Some peer observers also look at assignments and comment on the alignment of learning activities and assessment, perhaps while reflecting on how the course fits the program curriculum. It is still rare for peers to examine and provide feedback on how successfully students come to understand and perform the course goals, and even rarer for there to be regular conversations about the trajectory of student learning across multiple offerings.
To be fair, relatively few instructors represent their courses in a form that would make such observations possible, let alone efficient. For at least 15 years there have been public repositories of course portfolios that provide representations of iterative inquiry into student understanding, but the audience for that work has been limited. Most higher education faculty members give detailed reading to each other’s scholarship of discovery, but few are accustomed to reading representations of inquiry into learning. Accordingly, there is not yet a body of peers who can provide discerning feedback on growth in successfully designing courses that help generate quality understanding.
Our current Teagle-funded project aims to demonstrate how such a community of authors and readers can be developed among humanities faculty members. The “Collaborative Humanities Redesign Project” (CHRP) began by having faculty members explore instructional designs that are likely to enhance both the depth of student understanding and the success of a broader range of students. Those innovations in design, and the resulting evidence of student understanding, are now being formed into course portfolios that will be read by faculty members in CHRP and by volunteers from related fields outside the project. The readers are learning how to be a useful and constructive audience for quality instructional design, for representation and evaluation of student understanding, and for successive enhancements of teaching methods based on reflection and prior feedback. It is our hope that after three years CHRP will offer an existence proof of the idea that a community of scholars can become audience enough to help develop and sustain a body of excellent instruction and learning.