
Evaluating Learning: An Impossible Goal? By Annie Hayes

Earlier this year, Martyn Sloman, Adviser, Learning, Training and Development at the CIPD, created quite a storm when he asserted to TrainingZONE's members that it is no longer necessary to bother with evaluation. Annie Hayes looks at why the debate over return on investment continues to cause sparks to fly.


“If you’re properly aligned to the business needs and the organisation recognises the value of the training and development there shouldn’t be any need to be obsessed with the figures after the event,” Sloman commented in an exclusive interview for TrainingZONE in March.

"If you’re properly aligned to the business needs and the organisation recognises the value of the training and development there shouldn’t be any need to be obsessed with the figures after the event."

Martyn Sloman, Adviser, Learning, Training and Development at the CIPD.

And according to the Chartered Institute of Personnel and Development's (CIPD) own survey findings, despite the hype, thorough evaluation isn't actually happening in the workplace either.

While nine in ten respondents use some form of evaluation to demonstrate the value of their learning, training and development activities, the more complex forms tend to be applied to far fewer training events: 28% say they evaluate to Kirkpatrick's level two for at least 75% of training events, whereas only 9% evaluate a similar proportion to level four.

Donald Kirkpatrick’s model, developed in the late 1950s, is as follows (a sketch of the framework in code appears after the list):

  • Level 1 – Reaction – what is the reaction of the learner to the learning experience?

  • Level 2 – Learning – what has the learner actually learnt as a result of the learning experience?

  • Level 3 – Behaviour – to what extent have the behaviours of the learner changed as a result of the learning experience – sometimes referred to as transfer of learning to the workplace?

  • Level 4 – Results – how much better is the organisation performing as a result of the learner’s experiences in the learning programme?
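Because the four levels form a simple ascending scale, the framework is easy to represent as a data structure. The following is a minimal sketch in Python; the class names, the example course and the whole design are hypothetical illustrations for this article, not anything prescribed by Kirkpatrick's model.

    from dataclasses import dataclass, field
    from enum import IntEnum


    class KirkpatrickLevel(IntEnum):
        """Kirkpatrick's four evaluation levels, in ascending depth."""
        REACTION = 1    # how learners felt about the experience
        LEARNING = 2    # what they actually learnt
        BEHAVIOUR = 3   # transfer of learning to the workplace
        RESULTS = 4     # impact on organisational performance


    @dataclass
    class TrainingEvaluation:
        """Records which levels a training event was evaluated to."""
        course: str
        levels_completed: set = field(default_factory=set)

        def deepest_level(self):
            """Return the deepest level reached, or None if unevaluated."""
            return max(self.levels_completed, default=None)


    # A course evaluated only with a post-course reaction sheet:
    smile_sheet_only = TrainingEvaluation("Negotiation skills",
                                          {KirkpatrickLevel.REACTION})
    print(smile_sheet_only.deepest_level().name)  # REACTION

The CIPD figures above suggest that, recorded in a structure like this, most training events would never log anything beyond REACTION or LEARNING.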

So why are trainers still seemingly obsessed with evaluation, and how has it moved beyond the "How was the training course?" "OK, thanks" stage of assessment?

Iain Thompson, a Fellow of the CIPD and Managing Director of Squared Circle Consulting, writing for the CIPD, says there are two schools of thought when it comes to evaluation: those who advocate a scientific, quantitative and conclusive analysis, and those who believe in the value of subjective, qualitative and action-oriented exploration. So which holds the most value?

The scientific approach:
According to Thompson, the scientific approach supports ROI analysis, use of experimental and control groups, and, above all, the elimination of extraneous or even contributing variables.

“This is mainly because they want proof of the value of training itself (and, possibly, to control or curtail its costs if they are high in comparison to other options).”
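In practice, the ROI analysis this school favours boils down to a calculation along the following lines. This is a minimal sketch in Python with entirely hypothetical figures and function names, not a method taken from Thompson; the point of the control group is that its output estimates what the trained group would have produced anyway, so only the difference is credited to the training.

    def training_roi(trained_output, control_output, group_size,
                     value_per_unit, training_cost):
        """Return ROI as a percentage: (net benefit / cost) * 100.

        Only the *difference* between the trained group and the
        control group is attributed to the training; that is what
        the experimental design is for.
        """
        benefit = (trained_output - control_output) * group_size * value_per_unit
        return (benefit - training_cost) / training_cost * 100


    # Hypothetical example: 40 trained staff each close 144 sales a year
    # against a control-group average of 120, each sale worth £150, and
    # the programme cost £30,000.
    print(f"ROI: {training_roi(144, 120, 40, 150, 30_000):.0f}%")  # ROI: 380%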

The problem with this approach, says Thompson, is that what you essentially get is a snapshot of training taken at an arbitrary point.

And Dr Jo Cheesman, Partnership and Business Development Manager for Academee learning solutions, says that learning evaluation needs to go well beyond learners giving a training course marks out of ten, which is in essence what Thompson is pointing to when he refers to measuring the impact of training at a single point in time. Instead, says Cheesman, it needs to demonstrate the impact of learning on the organisation.

Recent Accenture research found that only 2% of those in charge of learning in organisations provide useful and measurable data such as productivity gains, revenue growth, net income growth, decreased employee turnover and overall industry recognition. Learning professionals need to get better at this, says Cheesman.

The qualitative approach:
The second school of thought, according to Thompson, consists of those who want to use evaluation to improve training and to reinforce its effect on participants' learning.

“They want to improve the transfer of training back to work (one of the biggest leakages in any training effort). They are ready to use interviews, small group surveys and feedback, and critical incident analysis deliberately to involve participants in renewed or new learning about the original training.

“Subjectivity and the inclusion of variables from activities related to the training (for example, promotion following management training, or changes in wider performance management practices introduced alongside appraisal training) are not a problem, because they assist in the interpretation of the rich data gathered. This school is interested in evidence of ongoing training impact, and what it may point to.”

But this type of 'deeper' evaluation is hard to execute, and many trainers struggle with it. Joe España, MD of Performance Equations, says the reason very little evaluation occurs beyond Kirkpatrick's level one (in 2004, 74 per cent of US companies evaluated training at Kirkpatrick's level one; 31 per cent at level two; 14 per cent at level three; and only 8 per cent at level four) is that it is difficult, trainers don't know how to go about doing it, and there is no commitment or desire in the business for it, even when the management information needed to conduct a study is at hand.

Going back to basics:
Trainer Godfrey Parkin believes that, because of the confusion surrounding the appropriate way forward for evaluation, it's generally not worth the time or effort.

“In my experience, evaluation within organisations generally is not worth the time and effort that goes into it. Evaluation is generally poorly conceived and executed, often measuring the wrong things in the wrong way, typically subject to significant errors in interpretation, rarely producing actionable or meaningful information, and hardly ever being adequately communicated to decision-makers.

“On that basis alone, I’d agree that evaluation, as it is normally carried out, should simply be terminated. OK, keep scaled-back smile sheets as an ego stroke for classroom trainers, but ditch the rest.”

"In my experience, evaluation within organisations generally is not worth the time and effort that goes into it. Evaluation is generally poorly conceived and executed, often measuring the wrong things in the wrong way, typically subject to significant errors in interpretation, rarely producing actionable or meaningful information, and hardly ever being adequately communicated to decision-makers."

Trainer, Godfrey Parkin.


In essence, trainers need to go back to the beginning and examine why evaluation is being carried out at all, in order to arrive at an appropriate method of conducting a meaningful assessment.

Fred Nichols posed these questions pointedly back in the early 1990s:

“Evaluate? Evaluate what? Training? What do we mean by training? What’s to be evaluated? A particular course? The trainees? The trainers? The training department? A certain set of training materials? Training in general?

"More to the point, why evaluate it? Do we wish to gauge its effectiveness, that is, to see if it works? If so, what is it supposed to do? Change behaviour? Shape attitudes? Improve job performance? Reduce defects? Increase sales? Enhance quality?

"What about efficiency? How much time does the training consume? Can it be shortened? Can we make do with on-the-job training or can we completely eliminate training by substituting job aids instead?

"What does it cost? Whatever it costs, is it worth it? Who says? On what basis? What are we trying to find out? For whom?”

According to Martin Schmalenbach, who has spent more than 10 years enhancing performance through change, training and development in organisations ranging from the RAF and local government to manufacturing and financial services, the very fact that evaluation seems to mean different things to different people is at the heart of the confusion.

Models abound, as do papers on the subject, but there is no getting away from the statistics, which show, according to the CIPD, that problems proving the value of training persist: 80% report that their training activities deliver greater value to the business than they are able to demonstrate, with the most commonly cited barriers to effective evaluation being a lack of resources to undertake lengthy evaluation exercises (76%) and a lack of time (67%). Until trainers get to grips with finding the best methods, these problems will continue to be swept under the carpet.

