Feedback


We don’t do anything without feedback. Our entire nervous system relies on it to help us determine appropriate responses.

  • In some situations we ask for feedback  – “Does this dress make me look fat?”
  • At other times feedback is unsolicited – “That dress makes you look fat.”

In either case the information is useful in developing a response.

But what about elearning? How do we get feedback on the courses we’ve developed or purchased, and implemented?

If you’re like most of us, you’re just glad to get the courses posted on the LMS and assigned to the target audience.

The passing grade and completion checkmark are sufficient feedback for determining success for both the student and the course.

This passive approach to evaluation hurts ALL stakeholders.

By not taking evaluation seriously we allow others, sometimes less qualified, to determine both the efficacy of our efforts AND set the ground rules by which we develop web-based training solutions.

Consider that 70% of all corporate online training is compliance-based. This means the company MUST develop or implement training to address regulations.

The natural impulse when we are TOLD to do something is to:

  • resist
  • look for a way not to have to comply
  • identify the least expensive means to comply.

For most organizations, the third option defines elearning. It’s the less expensive option compared with classroom instruction.

Let that concept sink in for a while. Consider COST as the original ROI of our educational efforts. We’re cheaper than a classroom.

But soon less expensive was no longer enough. Courses had to be developed faster. So application developers began to offer “rapid development tools”. This streamlined approach was based on employing the sound pedagogical principles embodied by the Microsoft PowerPoint application to allow educators to construct and deliver less expensive courses in a compact timeframe. Educational Nirvana had been achieved.

Has fast and cheap helped achieve the goals of education or simply met the minimal requirements for proof of compliance?

Has the checkmark in the LMS report become the new standard by which we measure success?

If you share my values as an educator, then you cannot be happy with being perceived as “less expensive than a classroom; constructed with a presentation tool”.

How did we get here?

It began with feedback. We passively allowed the checkmark in the LMS to become the de facto feedback standard by which educational success was measured. The checkmark became the goal. Comprehension was an afterthought – and certainly not required by the audit report.

Puns aside, I believe it is time to make a drastic course correction. We must make COMPREHENSION the standard by which we measure our attempts at educating our students and employees. All invested parties need to play a role in this effort. That means Chief Learning Officers, Compliance Managers, Application Developers, Training Managers, Instructional Designers, Students and Budget Directors.

“How quickly can they get through it?” is no longer acceptable.

Consider this instead: “Will they understand the concepts and modify their behaviors while performing their job?”

Faux Compliance

Recently an Instructional Designer colleague mentioned that he hoped his audience didn’t get upset with him because he was going to “clear their previous answers” AFTER they failed the test AND BEFORE they retook it.

His primary concern was the learner’s reaction, not their level of understanding.

When an audit results in a fine levied on a company, could “soft” assessments be the cause? What responsibility does an Organization, Trainer, LOD Manager AND Trainee have in ensuring that training adequately assesses learner understanding?

Testing to competency should be an acceptable goal in compliance training. Failing to achieve a level of proficiency OR understanding can lead to some type of future loss.

When assessments are “too difficult” then the learner gets frustrated.

When learners take “too much time” in their training employers get upset with lost man hours.

As a result, developers create distractors that are more effective at helping the student identify the correct answer than at inviting thoughtful consideration. This pleases BOTH the employee AND the business owner. It accomplishes the goal of “faux” compliance.

When an Instructional Designer consciously undermines his own value, and the impact of the course, by “dumbing down” an exam, he performs a disservice to himself, his company, his audience AND his fellow designers.

It is ALWAYS better to allow an individual to fail – for the company AND the individual. The company gets to identify who needs help with concepts and the employee gets the help he needs to better understand concepts or processes. Anything less is like putting a band-aid over an irregularly shaped mole.

I think it might be time to retire the word “compliance”. It’s too soft. Compliance implies surrender.

Instead, how about referring to it as partnership, brotherhood, cooperative or, even better, regulatory ownership training? These words indicate a shared personal interest in a common goal rather than concession, submission or consent.

We don’t need to put more teeth into regulatory training; we just need to do more than merely comply.

Correlation

Correlation is an interesting concept when applied to training. In the book Big Data the authors state that if action “A” and action “B” take place together, then the presence of “B” is a reliable predictor of “A”, even if “A” cannot be observed or even measured.

An example from the book: Wal-Mart analyzed customer purchases in concert with a number of other factors, one of which was weather. They found the sale of flashlights increased when violent weather was predicted. There was a similar increase in the sale of Pop Tarts. So the next time violent storms were forecast, Wal-Mart located the Pop Tarts next to the flashlights and saw increased sales of both items.

So what does this have to do with eLearning? There may be unexpected correlations between training activities and student success that indicate the relevancy of certain types of educational activities. These might be positive as well as negative, and might “fly in the face” of current theories.

Certainly we expect the design of the course to impact the initial ATTRACTION of the student to the course. But what about factors we might overlook that could influence or predict success? Consider the launch date.

How many companies think about the launch date of a course as an indicator of the success or effectiveness of an eLearning initiative? I would be willing to bet that it falls far behind other considerations such as the availability of funding, resource scheduling, audit probabilities and administrative directives. I wonder if it is even considered at all.

Consider this fact:

Most individuals/families take vacation from late July through the month of August.

Suppose funding becomes available on June 1st to update your annual Sexual Harassment web-based training course. Based on an internal directive to meet compliance, all company employees must complete this training by the end of the third quarter. A review of internal development resources shows that the updates can comfortably be completed, with reviews, to allow a course launch date of July 15th, giving employees a 45 day window to complete the training. Everyone is pleased with this projected schedule – from the administration through the developers and Training Managers.

The Sexual Harassment web-based training course is launched on schedule and training is completed before the end of August, keeping the company compliant. Yet during the last quarter of the year, harassment complaints reach an all-time high.

The competency of everyone from the training developers to the Director of Learning and Development is brought into question. The course is re-reviewed and found to be of exemplary quality, covering all of the issues now plaguing the company, from unwanted advances through sexual innuendos. What could have gone wrong? In a face-saving move the LOD director is fired, if only to demonstrate the company’s concern over the problem to the shareholders.

But the real reason the training was unsuccessful may never be dealt with because it was overlooked or not even considered. The truth may have been that people were focused on their upcoming vacations and their motivation for completing the training was “time based” and not “comprehension based”.

Simple statistical correlation analysis holds the key to identifying those factors, sometimes hidden, which influence or predict the success or failure of training initiatives.

If we are going to take online education seriously it’s time to connect the “data dots” – and not rely on simple causation theory.
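
To make that concrete, here is a minimal sketch of the kind of correlation check being suggested. Everything in it is hypothetical (the data, the variable names and the scenario), but it shows how a launch-timing factor could be tested against an outcome using a simple Pearson coefficient in Python.

    from math import sqrt

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length sequences."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / sqrt(var_x * var_y)

    # Hypothetical data: days between course launch and the vacation season,
    # paired with the harassment complaint rate in the quarter after training.
    days_before_vacation = [90, 60, 45, 30, 21, 14, 7, 3]
    complaint_rate = [0.5, 0.6, 0.8, 1.1, 1.6, 1.9, 2.4, 2.8]

    r = pearson(days_before_vacation, complaint_rate)
    print(f"launch timing vs. complaints: r = {r:.2f}")

A strongly negative coefficient in a run like this would suggest that courses launched closer to the vacation period precede worse outcomes, which is exactly the kind of hidden factor the story above describes.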

How Do You Know You NEED It?

How does a company define what is most important to their organization when it comes to web-based training?

I recently returned from an elearning development capabilities presentation where the majority of the client’s questions were about Project Management.

Their questions weren’t about training efficacy or how to determine the level of impact of their training. In fact, the training to be developed was very “PowerPointy” in nature – page-turning modules without even supportive graphics or a hint of interactivity.

The primary concern was what tool would be used to track the development of the courses, as if the value of the training initiative were bound to the delivery timeline.

It’s interesting that people choose to evaluate something by using “comfortable criteria” – referencing the familiar rather than the unfamiliar. It’s certainly understandable, but probably not the most practical.

Targeting project management tools as a way of evaluating elearning development competency is like measuring the quality of a programmer by the cleanliness of his desk. In a way the truth is more likely the inverse – the messier the desk, the better the programmer.

I think it’s time for us to become more uncomfortable. Begin to consider practical methods of evaluating what is important – don’t rely on the simple or familiar.

Look at impact, effectiveness, efficacy, experience, approach, understanding, engagement and comprehension.

These are the true evaluators of educational success, and if the results make you uncomfortable with your instructional approach and efforts to date, then it’s worth the effort.

It’s time to take off the rose colored glasses.

Our Solution: LiMS®

Data flows in two directions in web-based training modules. Content is presented to the student and the learner reacts to the content. Both activities generate data.

The activities of the learner represent the measurable data points of web-based training.

Learner actions may indicate levels of:

  • comprehension or completion
  • interest or disinterest
  • engagement or disassociation
  • challenge or success

– for the student as well as for the course.

This data is the “dark matter” of elearning – the 98% that has yet to be seen but which represents more value than what is presently being considered.

We’ve been developing a technology over the past few years that captures learner actions to a database, and then analyzes these actions in real time. This includes the:

  • sequence of actions
  • time considered in performing actions
  • time between actions
  • time required to conclude actions
  • time taken to reconsider actions
  • and more

These captured actions, once analyzed, provide a rich behavioral profile of the learner’s experience while at the same time identifying their learning preferences and tendencies.
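
As an illustration only (this is not the LiMS implementation, and every field and metric name below is an assumption), learner actions might be captured as timestamped events and then reduced to simple timing metrics:

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class LearnerAction:
        learner_id: str
        course_id: str
        action: str        # e.g. "view_page", "answer_question", "change_answer"
        timestamp: float   # seconds since the learner opened the course

    def timing_profile(actions: List[LearnerAction]) -> Dict[str, float]:
        """Summarize one learner's session: total time, average gap between
        actions, and how often answers were reconsidered (changed)."""
        ordered = sorted(actions, key=lambda a: a.timestamp)
        gaps = [b.timestamp - a.timestamp for a, b in zip(ordered, ordered[1:])]
        return {
            "total_time": ordered[-1].timestamp - ordered[0].timestamp,
            "avg_gap_between_actions": sum(gaps) / len(gaps) if gaps else 0.0,
            "reconsidered_answers": sum(1 for a in ordered if a.action == "change_answer"),
        }

    session = [
        LearnerAction("emp-042", "harassment-101", "view_page", 0.0),
        LearnerAction("emp-042", "harassment-101", "answer_question", 35.0),
        LearnerAction("emp-042", "harassment-101", "change_answer", 52.0),
        LearnerAction("emp-042", "harassment-101", "answer_question", 61.0),
    ]
    print(timing_profile(session))

Aggregated across a course and its audience, metrics of this kind are the raw material from which the behavioral profile described above could be built, without changing the content of the course itself.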

In addition an efficacy profile is constructed for each course so the course may be evaluated for challenges, competency and effectiveness.

By returning learner behavioral and course performance data as Business Intelligence we help companies mine and review the return on investment in online training by using a suite of rich analytical web-based dashboards.

As businesses realize that richer training experiences return more valuable data, they will give their Instructional Designers the resources AND latitude to develop the type of courses that will better educate students based on measurable datasets.

The science behind DATA and ANALYTICS can begin to move web-based education beyond being “faith based” and into the realm of measurable efficacy.

Next: How Do You KNOW You Need It?