
Principle 3
Evaluate Regularly

Complete Regular Evaluation for Effectiveness
Evaluation is essential. Every learning experience should be purposeful, measurable, and results-driven. Evaluation begins at the start of the design process, with clear, measurable objectives that define what success looks like. I use pre- and post-training surveys to measure changes in learners’ knowledge and confidence, confirming that the experience delivered meaningful outcomes. Whenever possible, I also hold direct conversations with participants and their supervisors to understand how new skills and knowledge are being applied in real-world contexts. These insights provide a clear picture of impact.
Principle 3 In Action
Project Overview
At The Ohio State University, I identified a critical gap in our instructional design process. Although data was collected after courses were taught, our department had no direct access to it and no structured way to engage instructors in conversations about the effectiveness of our design work. After spending a semester collaborating with instructors to build online courses, we had no formal mechanism to follow up and assess whether the instructional strategies and design decisions we had implemented succeeded in practice. This lack of feedback limited our ability to refine our approach and to ensure that our work was truly meeting instructional goals.
My Role
I initiated and led a pilot program to address this gap. After raising the issue with my supervisor, I proposed a post-course evaluation process and designed a set of reflective questions to guide conversations with instructors. These questions focused on assessing the success of our design collaboration, identifying areas for improvement, and understanding how students engaged with the course materials. I met with instructors shortly after they taught the courses we had designed together, creating a space for meaningful reflection and feedback.
Outcome
The pilot program proved both insightful and impactful. Through structured post-course conversations, we collected valuable data about how our instructional design work directly influenced course delivery and student engagement. Instructors also appreciated the chance to reflect on their courses, something they rarely had time to do. The feedback I gathered directly informed how I approached future design projects, making my work more responsive and results-driven. Following the pilot’s success, I was asked to lead a second phase in which other instructional designers conducted similar evaluations with their instructors. This expanded initiative received positive feedback across the team, reinforcing the value of integrating evaluation into the instructional design lifecycle.