
Parkin Space: Get a Strategy


Godfrey Parkin

Problems co-ordinating your evaluation? You need a "learning evaluation bible". Godfrey Parkin explains why and how to put it together.


Why do we spend so much money, and a substantial amount of time, on training? What is the point? Do we just do it because we have always done it, or because everyone else is doing it? Is it because we just like our people to be smarter? Or are we expecting that training will somehow help our company perform better? If it is the latter, how do we know what impact we are having?

I always find it a little disturbing when I come across yet another major corporation that does not have a defined learning evaluation strategy, or that has a strategy which everyone ignores. In fact, I’d say that nine out of 10 companies that I have worked with do not take learning evaluation seriously enough to have formalised policies and procedures in place at a strategic level.

A learning evaluation strategy sets out what the high-level goals of evaluation are, and defines the approaches that a corporation will take to make sure that those goals are attained. Without an evaluation strategy, measuring the impact and effectiveness of training becomes something decided in isolation on an ad-hoc, course-by-course basis.

Without an overall strategy to conform to, instructional designers may decide how and when to measure impact and what the nature of those measures will be, and will use definitions and methodologies that vary from course to course and curriculum to curriculum. When you try to aggregate that evaluation data to get a decent picture of the impact of your overall learning activity, you find that you are adding apples to oranges.

Most companies have done a fairly good job of standardising Level One evaluations of learner attitudes (smile sheets). Most also collect basic data that let them track activity such as numbers of learners or course days. But these are merely activity-based measures, not performance-based measures. They tell you little about the quality or impact of the training being provided.

Once you start to look at Level Two and up, each course tends to run its own evaluation procedures. In the absence of strategic guidelines or policies, those evaluation procedures can be token, invalid, meaningless, or inadequate – if they exist at all. Even a good instructional designer with a good grasp of evaluation practice may structure measures that, while superb within the context of the specific course, are impossible to integrate into a broader evaluation picture. And the more individual courses require post-training impact measurement, the more irritating it becomes for learners and their managers.

There are many approaches to measuring the impact of a company’s investment in learning that go beyond course-level evaluation. In fact, for the bigger issues, the individual course or individual learner is the least efficient point of measurement. You may decide, for example, that surveying or observing a sample of learners is more efficient than trying to monitor them all; you may decide that a few focus groups give you more actionable feedback than individual tests or questionnaires; you may choose to survey customer attitudes to measure the impact of customer service training, rather than asking supervisors their opinions; or you may opt to select a few quantifiable data points such as sales, number of complaints, production output per person, or staff turnover as key indicators of training success. Your strategy would lay out, in broad-brush terms, which of these approaches would be used.

A learning evaluation strategy is not enough, of course. You have to make sure that all of those involved in training design, implementation, and analysis understand the strategy and are able to implement it in their day-to-day work. I have found that the best tool you can give people is a company-specific “learning evaluation bible” that not only lays out the bigger picture, but also provides common definitions, norms, standards, baselines, and guidelines for developing and applying measurement instruments, and for interpreting the resulting data. (I used to call this a Learning Evaluation Guide, but the acronym was an open invitation for way too many jokes). This document should be a practical guide, rich in examples and templates, that makes it easy for everyone to conform, at a course, curriculum, or community level. The last thing the bible should be is a four-binder bureaucratic manual that looks like it was produced by an EU subcommittee.

Without an evaluation strategy, we are left floundering every time someone asks what our return on investment (ROI) on training is. I agree that calculating ROI is problematic, especially at the individual course level, and is often unnecessary. But if you are spending millions on completely revamping your sales-force training curriculum, you’d better be thinking about how to measure ROI, and build those measurement requirements in up front. You would not invest hundreds of thousands in an LMS unless you were convinced that the investment would bring an acceptable return, and you naturally will want to measure how well the investment performs against plan.
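The arithmetic behind an ROI figure is, for what it is worth, the easy part; the hard part is isolating and putting a monetary value on the benefits. A common formulation (along the lines of the Phillips approach) is:

$$\text{ROI (\%)} = \frac{\text{programme benefits} - \text{programme costs}}{\text{programme costs}} \times 100$$

The evaluation strategy is what determines which benefits count, how they are monetised, and over what period they are attributed to the training.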

At a departmental level, an evaluation strategy helps us to answer that awkward ROI question, not with defensive rationalisations, but with coherent, consistent evidence that supports the contention that our training investment is indeed achieving its desired results.

* Read more of Godfrey Parkin's columns here.