Writing Performance-Based eLearning Objectives

In my previous blog, I discussed the ways in which Benjamin Bloom’s revised Taxonomy of Educational Objectives can be leveraged to define effective and meaningful objectives for an eLearning course at the outset of its development. Thanks to Bloom, we’re armed with an active vocabulary of verbs to help us articulate our learner’s desired outcomes, as well as a solid hierarchical structure of cumulative learning processes, with lower-order thinking skills undergirding more complex, higher-order ones.

So now what? We have the words, we have the layout, and we have a firm grasp of what we want our learners to be able to do at the end of their training. How do we put it all together into the writing of coherent objectives? To start with, it may be helpful to shift our focus from the learning processes Bloom outlines to the results we expect to see from our instruction. Robert Mager provided a roadmap for doing just that with his book Preparing Objectives for Programmed Instruction.

Mager and Performance-Based Learning Objectives

Like Bloom, Mager was concerned with learning that could be measured and observed in order to gauge comprehension (and therefore the success of the lesson or training). He defined these outcomes even more precisely than Bloom by breaking them down into three separate components:

  • the performance of an action by the learner;
  • the conditions under which the action is to be performed;
  • and the criteria that define whether the learner’s performance is acceptable as a measure of success.

The Three Components of Learning Success

We’re already familiar with performance verbs that demonstrate comprehension from my previous blog on Bloom. The other components of Mager’s model are a little more nuanced.

In a nutshell, Mager’s conditions set the parameters within which the learner will need to perform the action that proves their comprehension. Often these conditions are the impetus for the performance of an action, or the “givens” or limitations placed upon it. Examples might include having a learner solve a series of equations without the use of a calculator, or setting up a specific scenario for them to respond to, such as processing a merchandise return for a customer.

Mager’s criteria establish measures of success to gauge your learner’s comprehension: for example, answering nine out of ten questions correctly or completing an exercise within a specific amount of time.
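To make Mager’s three-part formula concrete, here is a minimal sketch in Python. The class and function names are my own, purely illustrative choices, not part of Mager’s terminology: it models an objective’s performance, conditions, and criteria, and checks a score against a “nine out of ten”-style criterion.

```python
from dataclasses import dataclass

# Illustrative only: Mager's three components modeled as a simple data structure.
@dataclass
class LearningObjective:
    performance: str  # the observable action the learner must take
    conditions: str   # the "givens" or limitations placed on the performance
    criteria: str     # the measure that defines acceptable performance

objective = LearningObjective(
    performance="solve a series of linear equations",
    conditions="without the use of a calculator",
    criteria="at least nine out of ten correct",
)

def meets_criteria(correct: int, total: int, required_ratio: float = 0.9) -> bool:
    """Check a learner's score against a ratio-based criterion."""
    return correct / total >= required_ratio

print(meets_criteria(9, 10))  # a learner scoring 9/10 meets the criterion
```

Writing the objective down this way makes it easy to see when one of the three components is missing or too vague to measure.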

Test your understanding of Mager’s performance-based learning objectives by matching the performances, conditions, and criteria below to their correct component in his formula:

Applying the Mager Methodology to eLearning

The clarity of purpose and intent provided by Mager’s performance-based learning objectives benefits both the learner and the instructional designer because it clearly defines the scope of the instruction itself. Strong learning objectives establish both what the learner will need to demonstrate, and to what degree they will need to do so. It’s important to note that Mager acknowledged that including both criteria and conditions in your objectives may not always be practical, depending on the material at hand; use them at your own discretion.

eLearning is a terrific medium for the application of Mager’s theory because it allows the developer to build the tools necessary for the learner to achieve success directly into the framework of the lesson or training. These tools can include interactive elements such as drag-and-drop or fill-in-the-blank activities, periodic skills assessments, and so forth. Mager’s theories can also be applied at every level of eLearning course design, which addresses one more nuance of objective-writing: terminal versus enabling objectives.

Terminal vs. Enabling Objectives

Often, the objectives you define at the outset of your eLearning don’t apply to every stage of the learning process. Bloom’s revised taxonomy taught us that a learner has to build upon prior knowledge to attain higher levels of learning; similarly, the objectives you set for your learner will probably build upon one another to allow the learner to achieve success.

Terminal objectives describe the learner’s expected performance by the end of the course or training, while enabling objectives define the skills, knowledge, or behaviors learners must learn in order to fulfill those terminal objectives.

It’s important to establish at the outset of your eLearning what kind of objective you’re defining. You can make this distinction by figuring out how much your learner’s performance at the end of their training will depend on prior knowledge or training. Fortunately, Mager’s formula can be applied equally effectively to both kinds of objectives—after all, you want your expectations to be clear to your user no matter where they are in the learning process!

Your Turn

Use this job aid I created to put Mager’s theory into practice when writing your own eLearning objectives.

Now what?

This blog post from Convergence Training provides an excellent overview of Mager’s approach to writing performance-based learning objectives. If you’re interested in taking a deeper dive into his methodology, it’s more than worth your while to check out a copy of his original book, Preparing Instructional Objectives: A Critical Tool in the Development of Effective Instruction, 3rd Edition (Center for Effective Performance, 1997).

Defining eLearning Objectives with Bloom’s Revised Taxonomy

One of the chief tenets of good instructional design is to begin with the end in mind, as in: what do your learners need to be able to do by the end of your training? How will you measure their success?

These considerations, of course, are what go into the objectives that we set out at the beginning of our training to define its purpose and outcomes. However, beginning with the end in mind can be easier said than done, especially when you’re dealing with complex curricula or multiple learning paths. Writing objectives forces you to consider your audience, user roles, knowledge gaps, resources, and so forth, but how do you actually get to that point in your course design? How do you parlay these considerations into a “big-picture” endgame while keeping your eye on building effective training step-by-step?

Bloom’s Taxonomy: A Classification of Learning Behaviors

Benjamin Bloom’s Taxonomy of Educational Objectives, published in 1956, sought to address that very conundrum. Bloom’s goal was to help classroom educators establish a terminology around learning behaviors to help them assess their students’ comprehension. In doing so, he and his team fashioned a hierarchy of educational goals, shaped like a pyramid with the most foundational, lower-order thinking skills at the bottom and higher-order skills at the top:

Bloom's Original Taxonomy

In 2001, the original Taxonomy was revised by a former student and colleague of Bloom’s to reflect new research and insights into the way people learn. The most obvious revision was to Bloom’s terminology, switching the original terms from nouns to verbs and shifting the focus from learning behaviors to learning processes:

Bloom's Revised Taxonomy

Because the levels of learning are arranged in a hierarchy from least to most complex, the structure of the Taxonomy is also cumulative, meaning that a learner can’t be expected to obtain proficiency at the higher levels without first gaining a solid grounding in the lower ones. This makes perfect sense: for example, you wouldn’t be able to analyze or evaluate information without first gaining a basic understanding of what it means.

So how does this help you define the objectives for your eLearning courses? Let’s take a look at Bloom’s in action.

Putting Bloom’s to Work

Bloom’s Revised Taxonomy is a powerful tool to have in your eLearning arsenal, thanks to its utility in helping you choose appropriate delivery methods for your desired learning outcomes. To ensure that these outcomes are met, you can develop assessments, activities, or other forms of evaluation based on the same objectives. The process of writing effective learning objectives essentially breaks down into just three steps.

Step One: Find Your Level

First, identify where the goals for your training fall on the taxonomic hierarchy. At the end of your lesson or course, do your students need to have a fairly basic understanding of the information you’ve given them, or will they need to deploy more complex thinking skills to demonstrate their grasp of the material? Here’s a tip: aim high! If you’re developing training that requires learners to accrue higher and higher levels of comprehension, make sure you don’t shortchange them by setting your learning goals too low or they won’t be able to advance from one level to the next.

Step Two: Choose Your Verbs

The levels in the Taxonomy are associated with a number of action verbs that can be useful in helping you define your learning objectives; use them to start thinking about which verbs correlate your learning goals with the content and assessments you’ll develop to make sure those goals are met. Drag and drop the levels into place below to see some examples of their accompanying action verbs:

These verbs will not only give you an active vocabulary for the actions you expect your learners to take at the end of training, but also, crucially, help you construct learning objectives at the level at which you expect those learners to perform.

Step Three: Determine Your Outcomes

Now that you know what level of learning you expect your learners to achieve and the terms you will use to define it, bring these elements together with an outcome that will allow learners to demonstrate their mastery of the subject at hand. It’s important when writing objectives to establish outcomes that are measurable and observable, as well as in line with the corresponding level of learning. For example, an assessment at the lowest level, remembering, might involve a learner reciting a list from memory. At the applying level, you might have a learner complete a simulation to prove they can apply their knowledge to a more complex task. Analyzing, evaluating, and creating will all require more advanced activities or assessments, such as choosing the best answer in a scenario or compiling information in a new or alternative way.

Now what?

The medium of eLearning offers myriad opportunities to leverage Bloom’s Taxonomy in establishing learning objectives; use this job aid I put together for more ideas.

There are innumerable resources available online and in print about Bloom’s Taxonomy and its use in eLearning development. For an excellent and thorough review of both the original version of the Taxonomy and its revision, you can start with this article by The eLearning Guild; you can also check out my prior post on the subject.

Leveraging Social Styles for More Effective SME Interviews

One of the most important (but underrated) components of good instructional design is the ability to conduct a successful interview with a subject matter expert. SMEs are critical from the get-go of any project, not just for their knowledge and experience on the topic at hand but also for the insights they can offer in helping to clarify a project’s objectives, scope, assessment criteria, and so forth. As an interviewer, you need to not only gather the information you need in an efficient and timely manner, but also establish and maintain a good working relationship with your SME to ensure their support and commitment to your work together.

The Social Styles model of behavioral science, which maps personality styles in the workplace, is worth tapping into for the insights it may give you into your relationship with your SME. This approach defines employees’ behavioral patterns according to two criteria: how much they are driven by tasks versus relationships (responsiveness) and how much they ask versus tell (assertiveness). Depending on where employees fall along each axis, they are categorized according to one of four social “types”: Expressive, Amiable, Analytical, and Driver.

Drag the two sliders in the Storyline interaction below to see how adjusting levels of assertiveness and responsiveness determine the characteristics of each type:
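The two-axis grid behind that interaction can be sketched as a tiny function. This is a rough simplification, since the model treats assertiveness and responsiveness as continuums rather than booleans, and the mapping below is only an approximation of the standard quadrant layout:

```python
# A rough sketch of the Social Styles grid: assertiveness (ask vs. tell)
# crossed with responsiveness (task focus vs. people focus) yields one of
# four quadrants. Real assessments place people along continuous axes.
def social_style(tells_more_than_asks: bool, people_focused: bool) -> str:
    """Map the two behavioral axes to one of the four Social Styles."""
    if tells_more_than_asks:
        return "Expressive" if people_focused else "Driver"
    return "Amiable" if people_focused else "Analytical"

print(social_style(tells_more_than_asks=True, people_focused=False))  # Driver
print(social_style(tells_more_than_asks=False, people_focused=True))  # Amiable
```

In other words, a high-assertiveness, task-driven SME lands in the Driver quadrant, while a low-assertiveness, relationship-driven SME lands in the Amiable quadrant.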

So now that you know the basics of Social Styles, how can you apply them to your SME interviews? It breaks down into a three-step process.

Step One: Assess Your Style

A styles self-assessment is a great way to determine your own behavioral tendencies and gain some insight into the relative strengths and weaknesses that you bring to the workplace. It can also be invaluable in learning how to forge good relationships with colleagues, managers—or SMEs.

Step Two: Assess Your SME’s Style

In your kickoff meeting with your SME, use the styles grid to gauge elements of their behavior, body language, and manner of communication. Does your SME attempt to direct the meeting, or are they more passive and hesitant in responding to your questions? Do they wander off-topic into personal anecdotes and stories, or stick to “just the facts”? Identifying the Social Style of the SME you’re working with can go a long way toward ensuring that you get the most from your time together.

Step Three: Flex Your Style

Once you know where you and your SME fall on the styles grid, you can “flex” your own style as needed to more closely align with theirs, which can help to minimize personality clashes, establish your respective roles on a project, and put you both at ease with one another.

What can you do to tap into your SME’s style tendencies to make things go more smoothly? Here are some ideas:

  • Drivers tend to be excellent at time management and respond well to tasks; an interview with a SME who’s a Driver should be well-structured, goal-oriented, and run efficiently to ensure that they feel their time is being used judiciously and to keep them from feeling the need to direct things.
  • Analyticals generally value data and logic more than personal stories and chitchat. In an interview with an Analytical SME, keep your communication detailed, well-organized, and factual, and make sure to do more listening than speaking to avoid drowning out this less assertive type.
  • Expressives tend to be spontaneous, fun-loving, “big-picture” types. In an interview with an Expressive SME, embrace their inclination to think out loud by forgoing a bulleted agenda and pursuing some mutual exploration that engages your creative and critical thinking skills and gets you working together.
  • Amiable SMEs are likely to be more reserved in their communication as well as highly attuned to the reactions of others. Be supportive of their contributions in your interview and be sure to allow plenty of time for this thoughtful type to process and articulate their thoughts.

Bear in mind that you can also find yourself working with a SME who matches your own Social Style. This can be both a benefit—because you both may intuitively respond to the behaviors you share—and a drawback, because you may amplify the shortcomings of those same behaviors. In this case you can mitigate the negatives by flexing away from your own Social Style and toward that of the opposing quadrant. An Amiable interviewer should flex toward the Driver quadrant to ensure that the meeting proceeds efficiently and accomplishes its objectives; an Analytical interviewer should flex toward the Expressive quadrant to draw out their introverted SME and get them talking.

You should also be mindful that the Social Styles mapped in this model are merely guidelines to behavioral patterns, and you can only approximate your assessment of where you or a SME might fit. We all will exhibit elements from some of the other Social Styles that make a black-and-white categorization impractical. Nonetheless, hopefully this approach will get you thinking about how you can better relate to your SME, guide your interviews, and set the stage for a fulfilling and positive working relationship for both of you.

Now what?

I’ve created a job aid you can download to help you facilitate a Social Styles assessment of your SME before your next meeting. Use it to see where your respective Social Styles fall on the grid and to consider how you can leverage that knowledge to get the most out of your work together.

For more information about leveraging Social Styles in the workplace and in SME interviews, this overview video provides a concise breakdown of the different styles. You can also refer to Robert and Dorothy Grover Bolton’s People Styles at Work: Making Bad Relationships Good and Good Relationships Better for a more detailed explanation (New York: Ridge Associates, Inc., 1996).

CLIQ Is Growing!

After over five years of serving the training and development needs of clients around the country, CLIQ has officially added a new team member: Judith Barrett.

Judith started contracting for CLIQ over a year ago to help me meet the demands of multiple clients and projects. During that year, Judith did such excellent work and provided so much value to our clients that I was able to bring her on full time at the start of 2016.
Judith brings a wonderful blend of an artist’s eye and a software developer’s mind to her eLearning projects. In our work together I have come to really value her insight into effective instructional design, and her facility with both Captivate and Storyline. When the opportunity arose to make her a full-time part of CLIQ, I could not pass it up.

If you get to work directly with Judith, you will surely enjoy her enthusiasm and humor as I have. Please join me in welcoming her to the team that makes “training that CLIQs.”

Are You an Ally To Your Training Participants?

I have the good fortune of having SkyeTeam as one of my clients. The president of SkyeTeam, Morag Barrett, is an incredibly talented executive coach, facilitator, and author. One of the key teachings from her book, Cultivate: The Power of Winning Relationships, is around four types of relationships: Ally, Supporter, Rival, and Adversary.

In reading an early draft of the book, one comment that really struck me was in regard to Supporters:

“In my opinion the Supporter relationship is the most insidious relationship dynamic to have. A Supporter relationship tends to be comfortable and easy, and this comfort may result in complacency.”

When I first read the word “insidious” in regard to being a Supporter (generally perceived as a “good thing”) I was really taken aback. But I came to understand Morag’s point, and how it applies to the classroom.

As a trainer, I really try to manage the tone or the environment of the class. Do the participants feel safe sharing? Does the room/setup promote interaction? Are comments and questions encouraged?

Being an Ally means providing both reinforcing feedback and corrective feedback.

Consequently, one thing I’ve done consistently over the years – and continue to do – is reinforce participation. After all, “What gets rewarded gets repeated,” right?

While reinforcing feedback is important, any good coach or manager can tell you that corrective feedback is also important. People won’t improve their performance as much if they’re only told when they do something correctly, and never when they do something incorrectly.

In the Supporter relationship, you only tell your partner what they want to hear; not by being false, but by neglecting the less comfortable topics.

The Ally relationship includes a level of candor that allows its members to speak honestly and directly – helping each other see what they’re doing well – and not so well.

In the classroom, if you play the role of the Supporter, and only reinforce the positive, you’re missing out on opportunities to seize teachable moments and maximize the training experience.

If you act as an Ally, you are truly living up to your obligation to facilitate the understanding and application of the class topic.

When I do train-the-trainer classes, if we cover the topic of providing feedback, I like to say that, “Participants are never wrong; they’re just not always right.”

I find that this paradigm frees me up to be an Ally, while also fostering an environment conducive to active participation. What does this sound like?

Two of my favorite tactics:

  • Focus on the correct aspect of their response, e.g. “Yes, the option you’re looking for is in the File menu. What other choice in that menu might be the one we’re looking for?” Or, “You absolutely want to block out time for planning, but how else could we carve out a chunk of time for this?”
  • Help them think through their input, e.g. “I can see why it seems like reprimanding someone in public might be effective, but what could be the downside of that behavior?” Or, “What if all our managers did that?”

So the next time you are facilitating a class or a meeting, think about your behavior and whether you are merely supporting your participants or being an Ally to them – while maintaining an environment that fosters participation.

If you would like to learn more about being an Ally or establishing rules of engagement that cultivate winning relationships, I strongly recommend you order your copy of Cultivate today!

Maximize Training Impact with These Five Ideas

A couple of weeks ago I was in Dallas, working for a very cool client, Ad Giants, which enables small businesses to market themselves like big businesses. We put together a pilot training program for their new consultants. Here are a few insights and reminders I had in the course of developing and delivering the training:

It’s all about behavior

Any time I get involved in a training project, one of my first questions is invariably something along the lines of, “What do you want participants to be able to DO as a result of the training?” This is an important question, but equally important is what you do about the answer. There’s a place in the process for telling and showing the desired behavior, but where the rubber hits the road is when you get them to practice. As much as I dislike doing role plays as a participant, the fact of the matter is they are an excellent way to prepare for applying soft skills on the job.

Concepts are important too

As much as I emphasize behavior in my design and delivery, there are at least three factors worth considering:

  • Learning styles – People have preferences and strengths, and as valuable as role playing and practice can be, some folks really get a whole lot from ideas and concepts. You want to engage your learners to the best of your ability.
  • Buy-in – It’s great to make sure people can perform, but andragogy teaches us that adult learners need a reason to learn. And on that note, Keller’s ARCS model (see my past blog) teaches us that training should be relevant in order to motivate participants to learn.
  • Behavioral complexity – One of my all-time favorite quotes, from Harrington Emerson, sums it up nicely:

“As to methods there may be a million and then some, but principles are few.
The man who grasps principles can successfully select his own methods.
The man who tries methods, ignoring principles, is sure to have trouble.”

Stakeholder involvement really helps

I had the good fortune of having the President of the company as my primary stakeholder. What was even better was how invested he was in the outcome and how much he values training. I have not been as fortunate on other projects, and I can tell you that you work a whole lot harder for lower quality when the people who need to be engaged are not.

There’s no resource like THE source

A big part of our strategy was built upon the CEO sharing his expertise and modeling the behaviors that participants would then role play. I was fortunate to have someone who is just that great at what he does, and also does an excellent job of communicating it. The participants ate it up – they learned a lot and it also helped them bond with their new company.

There’s no world like the REAL world

One thing that really enhanced the quality of the training was the use of client case studies. Ad Giants has a YouTube channel that includes video testimonials from clients. What a treasure trove — to have all this great content ready to be leveraged for training purposes. We took one of the videos, and used it as a case throughout the first hour of the class, helping participants apply concepts to an actual client. The video added instructional variety, and the case provided a dose of reality.

So there you have it, a few insights from a great project. Hopefully they’ve given you some food for thought for maximizing the impact of your own training projects.

Templatized e-learning: Myth or Reality?

A little while back I worked on a project that caused me to deeply reflect on the instructional design process and the role of lay instructional designers. My assignment was to create a PowerPoint template that would facilitate the development of simple but effective e-learning modules by subject matter experts.


The project was great because we were leveraging a relatively simple and well-known tool, along with a standard look and format, to allow SMEs to share their knowledge, grow their skills, and make effective use of downtime.

Part of the challenge was the instructional design expertise of the SMEs. One of the ways we managed this was through the design of the template itself. It includes elements such as interactive questions with feedback, introductions and summaries, overviews, detailed steps, and branched scenarios.

By including these elements we are showing the SMEs some options as well as what goes into effective e-learning, while giving them a sense of how things can flow.

Of course, including these and other elements increases the cognitive load of the task; there are a lot of slides to look through. So there is a balance between giving enough content to provide context and not giving so much that the design and structure are obscured.

One way we managed this is by availing ourselves of PowerPoint features such as hidden slides and sections (see my previous blog on this feature).

Another way we managed this – as well as the original concern about the SMEs’ instructional design expertise – was to provide training that covered some ID basics as well as the care and feeding of the template.

This brings me back to my original question: is there such a thing as templatized e-learning? Or perhaps I should modify the question by asking, “Is there such a thing as an effective e-learning template?”

My insights are:

  • An effective template, like effective training, is focused on objectives and tailored to the audience.
  • It’s still good practice to tell people what you’re going to tell them, tell them, and then tell them what you told them. I know this advice is old, but the reality is that people miss things and/or fail to retain them when exposed to them only once.
  • People learn best when they have a real reason to learn, and are solving authentic problems.
  • People like to feel effective (see my previous blog on motivation in learning) – they like to feel like they have accomplished something.

All these things can be built into a template. Of course the devil is in the details, but when you revisit the audience and the client’s goals, you’re off to a pretty good start.

Group Those PowerPoint Slides!

Have you ever worked with a larger PowerPoint presentation, and wished for a way to organize your slides into groups? PowerPoint 2010 introduced a long-awaited feature: sections.

One of the features I have appreciated about developing e-learning projects in Adobe Captivate is the ability to group slides so that I can easily organize, navigate among, and manipulate them. By chunking the slides into groups such as “introduction,” “demonstration,” “simulation,” and “assessment,” I can easily see the big picture and work more efficiently.

PowerPoint 2010 introduced this functionality as “sections.” Using sections you can easily organize your slides into meaningful groups.

Using the Filmstrip in Normal View, or in Slide Sorter view, simply right-click the slide that would be the first one in the section. A shortcut menu appears, and one choice is “Add Section.” Clicking that option adds a divider that says “Untitled Section.”

Right-clicking this new divider brings up a different shortcut menu with options such as renaming and removing the section. Clicking the arrow next to the section name lets you expand or collapse the section – giving you control over which slides you see, and facilitating navigation and organization.

For more information about this feature, check out Microsoft’s help page for Sections.

Targeting the Right Skill Level

Have you ever struggled with determining the appropriate skill level of a training session or module? In the course of writing training objectives, sometimes you can get stuck on how much you can/should accomplish given the audience, available resources, and the overall goal.

This article briefly describes a model that can help you determine what it is you hope to accomplish with your training.

Bloom’s Revised Taxonomy

It was way back in 1956 that Benjamin Bloom and his colleagues developed a Taxonomy of Educational Objectives for the Cognitive Domain. The Taxonomy described six levels of mental mastery of a topic.

In spite of being well over 50 years old, the original Taxonomy is still widely used in education and training. However, the Taxonomy was revised in the 1990s by former students of Bloom’s and published in 2001. A key difference between the original and revised taxonomies is the shift from nouns to verbs to label the levels of mastery.

The verbs are organized sequentially, from Lower-Order to Higher-Order Thinking Skills. You can mouse over the terms to learn more about each level.

In Action

I worked with a client to develop a year-long train-the-trainer program for internal subject matter experts. The goal of the program was to enable SMEs to develop and deliver training needed within their respective departments throughout the Operations division of the organization.

Ultimately, since the program objective was for participants to develop and deliver their own training, we were designing for the highest-level thinking skill – creating. However, based on the hierarchical nature of the six skill levels, and the gap between the participants’ current skillset and desired performance, the program included objectives at all the levels. Sessions included objectives such as:

  • Remembering – Describe the procedure for requesting instructional design support.
  • Understanding – Explain how different learning modalities can be addressed through variety in instructional design.
  • Applying – Use the Criterion Referenced Instruction Model to construct behavior-based objectives.
  • Analyzing – Identify different types of “difficult classroom behaviors” and develop strategies for responding to them.
  • Evaluating – Use the Training Feedback Tool to provide actionable feedback on peer presentations.
  • Creating – Develop a 15-minute training module which includes visual aids and handouts.

Your Turn

If you would like to put the Taxonomy into action on a current or forthcoming training project, try out this simple Skill Level Assessment I put together. The template helps you think through the training objectives in light of the Taxonomy.

Now What?

You may have noticed how helpful the Taxonomy can be in identifying and constructing objectives. There are many resources on the Internet that tie performance verbs to the Taxonomy to facilitate designing effective objectives; this one from the University of West Florida is good.

This article from the University of Texas provides some more background on the original and revised taxonomies, and includes a few verbs.

This post is one in a series that highlights different instructional and performance technology theories — concisely explaining them in a way that can help you put them to work immediately or just enhance your credibility when speaking with colleagues or clients. 

Enhancing Learning Retention

Have you ever struggled with having learners remember and apply what they learned in training? There are, of course, a wide array of possible reasons for training not “sticking”, including learner motivation, quality of delivery, learner readiness, or the possibility that training wasn’t the right solution to the problem in the first place. Another possibility is that the instructional design is at fault.

This article briefly describes a model that can help you design training that is built to facilitate retention and transfer.

Gagne’s Nine Events of Instruction

Robert Gagne was an educational psychologist who made many significant contributions to the world of instructional technology; one of the most pragmatic ones is his “Nine Events.” I call it “pragmatic” because whether you have a degree in instructional design or got talked into training because you know a lot about a certain topic, you can easily understand and apply the Events to create effective training:

  1. Gain attention – Just as I discussed in my post on the ARCS Model, you can’t teach them if you don’t have their attention. Get them interested in you and what you have to say.
  2. Inform learners of objectives – Tell them what you’re going to tell them. A classic mistake trainers make is to think that because they tell learners something once, they’ll remember it. By sharing objectives, you are facilitating retention of what comes later.
  3. Stimulate recall of prior learning – Another mistake trainers make is to jump right into a topic. This is like sprinting without first warming up. Help learners use the “muscles” they’re going to be using by connecting what they already know with what they don’t. This is also known as creating “semantic scaffolding” – building new knowledge on a foundation of existing knowledge.
  4. Present content – Now they’re primed for what you have to share, so go ahead and share it. Just remember that this doesn’t mean you get to do the “data dump.” Engage different learning modalities by telling, showing, and providing hands-on practice as appropriate.
  5. Provide “learning guidance” – This is where you help learners “encode” (remember and attach meaning to) what you’ve shared. This means doing things like providing examples and non-examples, analogies, anecdotes, illustrations, diagrams, and so on.
  6. Elicit performance – This is training, right? So make sure they’re doing something. Have them practice what they’re learning, whether it’s filling out a form, doing a behavioral interview, or closing the sale. It all comes down to performance.
  7. Provide feedback – An early mentor of mine was fond of saying that “Practice does not make perfect. Practice makes permanent.” She would elaborate that “Perfect practice makes perfect.” And the way you get perfect practice is by providing feedback. Remember that with feedback, telling learners what they’re doing right is as important as telling them what they’re doing wrong.
  8. Assess performance – If you want to make sure they’re learning, then test them. Testing can be formal (written/performance) or more informal (verbally quiz them). Just bear in mind that the more rigorous the assessment, the better learners will retain the material.
  9. Enhance retention & transfer – As you wrap up, you can employ simple strategies like, “Tell them what you told them,” i.e. summarize; and/or you can have them do things like describe how they will apply this back on the job, write letters to themselves that you pop in the mail a month later, or fill out “performance contracts” that get signed by their supervisors.

In Action

As part of a self-paced course on performing various customer service procedures, I developed a module that taught the process for helping a customer return an item. Here is a quick look at the module from a “Nine Events” perspective:

  1. Gain attention – The module started by asking rhetorically if the learner has had to help a customer return an item before. Did they know what to do? Were there any problems? Did the process go as well as it could have?
  2. Inform learners of objectives – The objectives were displayed. Nothing fancy required in terms of design.
  3. Stimulate recall of prior learning – The learner was asked if they had been on the other side of the desk – if they had ever returned an item. How were they feeling? What did they want from the interaction? How did it go? Why?
  4. Present content – The module taught the process from both customer service (“Say this…ask that…”) and technological (“Click here…type this…”) standpoints.
  5. Provide “learning guidance” – A screen displayed tips, including a downloadable job aid covering the technological steps.
  6. Elicit performance – A scenario unfolded where the learner was forced to make choices as a virtual customer returned an item.
  7. Provide feedback – At each choice, the learner received feedback about the consequences and efficacy of the choice.
  8. Assess performance – A similar scenario unfolded, but this time responses were scored. We elected to also provide feedback similar to the practice activity. The assessment could be retaken immediately until the learner passed.
  9. Enhance retention & transfer – A summary screen displayed the key points along with another link to the job aid.
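For readers who build this kind of module in an authoring environment that supports scripting, the structure behind Events 6 through 8 above can be sketched in code. The sketch below is purely illustrative: the prompts, choices, feedback text, and passing score are hypothetical, not taken from the actual module. What it shows is the shape of a scored branching scenario with per-choice feedback and an assessment that can be retaken until the learner passes.

```python
# Illustrative sketch (hypothetical content): each step of a return scenario
# lists the learner's choices, the points each earns, and the feedback shown.
SCENARIO = [
    {
        "prompt": "The customer has no receipt. What do you do?",
        "choices": {
            "a": ("Refuse the return outright.", 0,
                  "Policy allows receiptless returns with ID; this loses the customer."),
            "b": ("Look up the purchase by the customer's ID.", 1,
                  "Correct: the system can locate the original transaction."),
        },
    },
    {
        "prompt": "The item shows signs of use. What do you do?",
        "choices": {
            "a": ("Process it as a standard return.", 0,
                  "Used items must be flagged for inspection first."),
            "b": ("Flag the item for inspection before refunding.", 1,
                  "Correct: inspection protects against fraud."),
        },
    },
]

def run_assessment(answers, passing_score=2):
    """Score one attempt; return (score, passed, feedback list)."""
    score = 0
    feedback = []
    for step, answer in zip(SCENARIO, answers):
        _, points, note = step["choices"][answer]
        score += points
        feedback.append(note)  # feedback shown even during assessment (Event 7)
    return score, score >= passing_score, feedback

# A first attempt falls short, so the learner retakes until passing (Event 8).
attempts = [["a", "b"], ["b", "b"]]
for attempt in attempts:
    score, passed, notes = run_assessment(attempt)
    if passed:
        break
```

Note that the sketch keeps feedback in the assessment loop rather than only in practice, mirroring the decision described above to provide feedback in both the practice scenario and the scored one.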

Your Turn

If you would like to apply the Nine Events of Instruction to a current or planned course, try my Nine Events of Instruction Planning Guide for your analysis/design.

Now What?

There really is a plethora of information out there about this model, because it is just that useful. I have been a fan of Don Clark’s site for many years, and he has a nice synopsis of the Nine Events.

Please contact me if you would like to discuss the ideas in this article, or how I can help your organization’s training CLIQ.
