Why do we need assessments as part of the learning experience?

If you ask every learning designer you know to answer that question, you will get a lot of different answers:

  • Our stakeholders want proof: quantified results and ties to business outcomes
  • Evaluation (assessment) is the last step in ADDIE (it’s supposed to be the first step, too)
  • We should be supporting higher-order thinking and problem-solving, not just rote knowledge
  • We need it to validate our efforts
  • Evaluation should tell designers what they did right, and where they went wrong
  • It’s for the LMS
  • It’s feedback so learners know how their performance compares to some standard
  • The result is part of our formal competency measurement during a year-end review
  • So we have some idea of how a help-desk worker will empathize with a frustrated user

What’s your answer? Is measurement part of a strategy for learning, or is it merely an end-of-the-process step in an instructional design model?

Organizing assessment by levels of thinking

In several articles in Learning Solutions, Mike Dickinson addresses a number of ways to deal with this challenge by writing multiple-choice questions that assess the level of thinking required to answer them. To summarize, much of what we do in evaluation measures learning against Benjamin Bloom’s six levels of cognitive behavior, from knowledge at the lowest level to evaluation at the top.

Mike also discusses evaluation in terms of J.P. Guilford’s analysis of cognition as a matter of convergent vs. divergent thinking. What test questions most often measure is convergent thinking, where a preexisting correct answer exists. With divergent thinking, there is no preexisting correct answer; the assessment must require the learner to do something with existing knowledge to create new knowledge or compose a new solution. Guilford’s divergent thinking seems most closely related to Bloom’s synthesis and evaluation levels, and convergent thinking to the first four levels. When learning objectives call for those higher levels, we need tools that can assess learners at those levels. The challenge is writing such higher-order questions for an assessment.

And there are further details to consider. In the Learning Guild’s Research Spotlight: Writing Assessments to Validate the Impact of Learning, A.D. Dettrick, Jane Bozarth, Sharon Vipond, and Marc Rosenberg, along with Mike Dickinson, provide industry perspectives, practical guidelines and resources, a discussion of where we in L&D are going with assessments, a helpful glossary, and a number of templates to help you with your own assessments.

Creating better assessments

In their September 30 session at The Measurement & Evaluation Online Conference, “How Good Intentions Create Bad Assessments,” Sean Hickey and Cara North will explore common test-writing pitfalls that make it possible for smart test-takers to excel without learning the content. They will share best practices for writing multiple-choice questions, such as including plausible distractors and avoiding grammatical cues or distractor-length differences that “give away” the correct answer (for example, a question stem ending in “an” when only the correct option begins with a vowel).

Register now for this online conference and learn new strategies for enhancing your L&D projects! If you are interested but unable to attend on September 30 or October 1, register anyway and you’ll receive access to the recorded sessions and handouts after the event.

You can also get a Learning Guild Online Conference Subscription to access this and all online conferences for the next year, plus much more. Attend the Online Conference and take your assessments to the next level.