Image Source: Water Bear Learning
This two-part feature article orients the reader on the classic instructional design model known as ADDIE, an acronym for Analysis-Design-Development-Implementation-Evaluation. Part 1 expounded on A-D-D; this Part 2 completes the feature with I-E.
Implementing a course involves training the instructor, preparing the learners, and arranging the learning space. As a best practice, the developers who designed the course usually teach it as well, though nothing prevents them from delegating that role to a different trainer. In that case, the developer trains the instructor to facilitate the course.
Preparing the learners entails providing all learning materials and necessary equipment for the attendees: PowerPoint slides, markers, a whiteboard, handouts, devices, and so on. The multimedia equipment must also be tested beforehand in the actual learning space, which spares the trainer from technical difficulties on the day itself.
After setting everything up, the trainer can fully concentrate on running the course and providing a quality session or learning experience for the participants.
But it does not stop there. Later, the instructor must evaluate how the training went.
There are two ways to evaluate a course: formative and summative. The key difference is that formative evaluation is conducted during the program, while summative evaluation is conducted after it.
A. Formative evaluation
In this approach, instructors and training specialists evaluate the learning materials as they go through each part of the ADDIE model.
Dick and Carey suggest the following models for formative evaluation:
1. One-to-one evaluation. Write out assessment questions beforehand that are clear, consistent, and objective. This helps identify the most and least effective areas of the instruction so it can be improved. Example: while conducting a basketball camp for 11-year-olds, show the children a video to help them learn basic basketball rules; during the evaluation, test the video's effectiveness by asking a series of questions about its clarity, impact, and feasibility.
2. Small-group evaluation. Choose participants so that the learner sub-groups are accurately represented, determine the effectiveness of the changes made after the one-to-one evaluation, and monitor activities in a group setting. Example: choose children who are both proficient and inexperienced in basketball, then assess again for clarity, impact, and feasibility. Dick and Carey suggest this attitude questionnaire: (1) Was the instruction interesting? (2) Did you understand what you were supposed to learn? (3) Were the materials directly related to the objectives? (4) Were there sufficient practice exercises? (5) Did the tests really measure your knowledge of the objectives? (6) Did you receive sufficient feedback?
3. Field trial. A real-time rehearsal of all the instructional activities planned for the course. Conduct it in a setting akin to the actual instructional setting, assess once more for clarity, impact, and feasibility, make changes accordingly, and then be ready to deliver the instruction.
B. Summative Evaluation
While the formative approach provides ongoing feedback, the summative approach assesses the instruction’s worth after it is completed.
According to Kirkpatrick’s model, there are four types of outcomes to evaluate: reaction, learning, behavior, and results.
1. Reaction
- This is the easiest to evaluate. The trainer simply writes a series of statements, and the students indicate how strongly they agree or disagree with each one. Below is an example from a basketball training camp.
| Statement | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree |
|---|---|---|---|---|---|
| 1. The basketball camp's objectives were clear to me. | | | | | |
| 2. Drills and exercises fit with the camp's goals. | | | | | |
| 3. Instructions given were clear, easy, and understandable. | | | | | |

More specific reactions about the instructor, activities, assessments, and audio-visuals can also be documented:

| Statement | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree |
|---|---|---|---|---|---|
| 4. The video on basketball rules was helpful and informative. | | | | | |
| 5. The passing drill helped improve my passing abilities. | | | | | |
| 6. The instructor was kind and helpful. | | | | | |
- Open-ended questions reveal a course's overall strengths and weaknesses. It is also best to make the feedback survey anonymous; this encourages the learners to express their attitudes truthfully, without fear of judgment or retribution.
- Together, these responses help the instructors or trainers see what went well in the course and what can be improved.
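For instructors who tally these reaction surveys by hand, the arithmetic can be sketched in a few lines of code. This is a hypothetical illustration, not part of the ADDIE or Kirkpatrick literature: the statements and scores below are invented, and responses are assumed to be coded 1 (strongly disagree) through 5 (strongly agree).

```python
# Hypothetical tally of Likert-scale reaction surveys.
# Assumption: each response is coded 1 (Strongly Disagree) to 5 (Strongly Agree).
responses = {
    "The camp's objectives were clear to me": [5, 4, 4, 3, 5],
    "Drills and exercises fit the camp's goals": [4, 4, 5, 4, 3],
    "Instructions were clear and understandable": [5, 5, 4, 4, 4],
}

def average_rating(scores):
    """Mean rating on the 1-5 scale; higher means stronger agreement."""
    return sum(scores) / len(scores)

for statement, scores in responses.items():
    print(f"{statement}: {average_rating(scores):.2f} / 5")
```

A statement that averages well below the scale's midpoint of 3 flags a part of the course that needs attention.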
2. Learning
Learning is evaluated through a post-test that checks how well the participants achieved the learning objectives. Here are three corresponding examples of how to evaluate their KSAs (knowledge, skills, and attitudes).
A. Knowledge: achievement tests
- 4. How many steps constitute a travelling violation? a. 2 / b. 3 / c. 4 / d. 5
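Scoring a multiple-choice post-test like this is mechanical, so it can be automated. The sketch below is a hypothetical illustration: the answer key is invented for this example, with item 4 keyed to "b" on the assumption that three steps constitute travelling.

```python
# Hypothetical post-test scorer: compare a participant's answers
# against an answer key and report the percentage of correct items.
ANSWER_KEY = {4: "b"}  # assumed key for illustration only

def score(answers, key=ANSWER_KEY):
    """Return the percent of keyed items the participant answered correctly."""
    correct = sum(1 for item, choice in key.items() if answers.get(item) == choice)
    return 100.0 * correct / len(key)
```

The resulting percentage maps each participant's performance back to the learning objectives the test was built around.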
B. Skills: performance tests
- Let students practice public speaking skills such as vocal variety, delivery, intonation, inflection, and diction. Then evaluate their performance against a rubric constructed beforehand.
C. Attitude: questionnaires
- My job gives me opportunities to develop new skills. (Rate on a five-point scale from strongly disagree to strongly agree.)
3. Behavior
This refers to the transfer of KSAs from the training context to the performance setting. For example:
| Training context | Performance setting |
|---|---|
| After training call center agents on upselling customers… | …evaluate whether or not they are doing it in actual calls. |
| After conducting the basketball camp… | …check how the basketball students apply the skills they were taught outside the workshop. |
4. Results
This determines the overall success of the training program by measuring certain factors. For example, in a job setting, the trainer asks:
How did the training affect the following?
- Job satisfaction
Another example is assessing how the basketball training affected a player’s confidence, enjoyment and win-loss rates.
By keenly applying formative and summative evaluation, talent development professionals can continually refine their courses for the learners' benefit.
The ADDIE model comprises Analysis, Design, Development, Implementation, and Evaluation. It enables us to design targeted objectives, appropriate courses, and quality content for the learners. It can also be the go-to model whenever we encounter problems or concerns in designing effective learning programs.